Convergent plug-and-play with proximal denoiser and unconstrained
regularization parameter
- URL: http://arxiv.org/abs/2311.01216v1
- Date: Thu, 2 Nov 2023 13:18:39 GMT
- Title: Convergent plug-and-play with proximal denoiser and unconstrained
regularization parameter
- Authors: Samuel Hurault, Antonin Chambolle, Arthur Leclaire, Nicolas Papadakis
- Abstract summary: In this work, we present new proofs of convergence for Plug-and-Play (PnP)
algorithms such as Proximal Gradient Descent (PGD) and Douglas-Rachford Splitting (DRS).
Recent research has explored convergence with denoisers that write exactly as proximal operators, at the cost of restrictive conditions on the regularization parameter.
First, we provide a novel convergence proof for PnP-DRS that does not impose any restrictions on the regularization parameter.
Second, we examine a relaxed version of the PGD algorithm that converges for a broader range of regularization parameters and enhances the accuracy of image restoration.
- Score: 12.006511319607473
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we present new proofs of convergence for Plug-and-Play (PnP)
algorithms. PnP methods are efficient iterative algorithms for solving image
inverse problems where regularization is performed by plugging a pre-trained
denoiser in a proximal algorithm, such as Proximal Gradient Descent (PGD) or
Douglas-Rachford Splitting (DRS). Recent research has explored convergence by
incorporating a denoiser that writes exactly as a proximal operator. However,
the corresponding PnP algorithm has then to be run with stepsize equal to $1$.
The stepsize condition for nonconvex convergence of the proximal algorithm in
use then translates to restrictive conditions on the regularization parameter
of the inverse problem. This can severely degrade the restoration capacity of
the algorithm. In this paper, we present two remedies for this limitation.
First, we provide a novel convergence proof for PnP-DRS that does not impose
any restrictions on the regularization parameter. Second, we examine a relaxed
version of the PGD algorithm that converges across a broader range of
regularization parameters. Our experimental study, conducted on deblurring and
super-resolution problems, demonstrates that both of these solutions enhance
the accuracy of image restoration.
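To make the setup concrete, here is a minimal NumPy sketch of a PnP proximal gradient iteration with an optional relaxation step, in the spirit of the abstract above. It is an illustration only, not the authors' implementation: the denoiser is a toy stand-in for the pre-trained proximal denoiser, and the names `toy_denoiser`, `pnp_pgd`, `tau`, and `alpha` are illustrative rather than taken from the paper.
```python
# Minimal sketch of PnP proximal gradient descent (PnP-PGD) for
#   min_x 0.5 * ||A x - y||^2 + lambda * phi(x),
# where the proximal step on phi is replaced by a denoiser.
# Illustration under simplifying assumptions, not the authors' code.
import numpy as np


def toy_denoiser(x: np.ndarray) -> np.ndarray:
    """Stand-in for a pre-trained denoiser: a 3-point moving average.
    In PnP, the denoiser's strength plays the role of the regularization parameter."""
    return (np.roll(x, -1) + x + np.roll(x, 1)) / 3.0


def pnp_pgd(y: np.ndarray, A: np.ndarray, tau: float = 1.0,
            alpha: float = 1.0, n_iter: int = 100) -> np.ndarray:
    """alpha = 1 gives the plain PnP-PGD update; alpha < 1 gives a relaxed
    (averaged) variant, an illustrative form of the relaxation discussed above."""
    x = A.T @ y  # crude initialization from the observation
    for _ in range(n_iter):
        grad_f = A.T @ (A @ x - y)          # gradient of the data-fidelity term
        z = toy_denoiser(x - tau * grad_f)  # denoiser in place of prox_{tau*lambda*phi}
        x = (1 - alpha) * x + alpha * z     # relaxation / averaging step
    return x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = np.eye(64)                                   # toy forward operator
    x_true = np.sin(np.linspace(0, 4 * np.pi, 64))   # toy 1-D "image"
    y = A @ x_true + 0.1 * rng.standard_normal(64)   # noisy observation
    x_hat = pnp_pgd(y, A, tau=0.9, alpha=0.8)
    print("reconstruction error:", np.linalg.norm(x_hat - x_true))
```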
Related papers
- On the convergence of adaptive first order methods: proximal gradient and alternating minimization algorithms [4.307128674848627]
AdaPG$^{q,r}$ is a framework that unifies and extends existing results by providing larger stepsize policies and improved lower bounds.
Different choices of the parameters $q$ and $r$ are discussed and the efficacy of the resulting methods is demonstrated through numerical simulations.
arXiv Detail & Related papers (2023-11-30T10:29:43Z) - Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization using cubics (ARC) have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously allow for inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z) - A relaxed proximal gradient descent algorithm for convergent
plug-and-play with proximal denoiser [6.2484576862659065]
This paper presents a new convergent Plug-and-Play (PnP) proximal gradient descent algorithm.
The algorithm converges for a wider range of regularization parameters, thus allowing more accurate restoration of an image.
arXiv Detail & Related papers (2023-01-31T16:11:47Z) - Proximal denoiser for convergent plug-and-play optimization with
nonconvex regularization [7.0226402509856225]
Plug-and-Play (PnP) methods solve ill-posed inverse problems through iterative proximal algorithms by replacing a proximal operator with a denoising neural network.
We show that the gradient-step denoiser used in these methods actually corresponds to the proximal operator of a functional (see the notation sketch after this list).
arXiv Detail & Related papers (2022-01-31T14:05:20Z) - Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex
Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We design two optimal algorithms that attain these lower bounds.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z) - Variance-Reduced Off-Policy TDC Learning: Non-Asymptotic Convergence
Analysis [27.679514676804057]
We develop a variance reduction scheme for the two time-scale TDC algorithm in the off-policy setting.
Experiments demonstrate that the proposed variance-reduced TDC achieves a smaller convergence error than both the conventional TDC and the variance-reduced TD.
arXiv Detail & Related papers (2020-10-26T01:33:05Z) - ROOT-SGD: Sharp Nonasymptotics and Near-Optimal Asymptotics in a Single Algorithm [71.13558000599839]
We study the problem of solving strongly convex and smooth unconstrained optimization problems using first-order algorithms.
We devise a novel algorithm, referred to as Recursive One-Over-T SGD (ROOT-SGD), based on an easily implementable, recursive averaging of past gradients.
We prove that it simultaneously achieves state-of-the-art performance in both a finite-sample, nonasymptotic sense and an asymptotic sense.
arXiv Detail & Related papers (2020-08-28T14:46:56Z) - Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth
Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point at each step.
Our results are expressed in the form of simultaneous primal and dual side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z) - On the Almost Sure Convergence of Stochastic Gradient Descent in
Non-Convex Problems [75.58134963501094]
This paper analyzes the trajectories of stochastic gradient descent (SGD).
We show that SGD avoids strict saddle points/manifolds with probability $1$ for a broad range of step-size policies.
arXiv Detail & Related papers (2020-06-19T14:11:26Z) - Plug-and-play ISTA converges with kernel denoisers [21.361571421723262]
The plug-and-play (PnP) method is a recent paradigm for image regularization.
A fundamental question in this regard is the theoretical convergence of the PnP iterations.
arXiv Detail & Related papers (2020-04-07T06:25:34Z) - Lagrangian Decomposition for Neural Network Verification [148.0448557991349]
A fundamental component of neural network verification is the computation of bounds on the values their outputs can take.
We propose a novel approach based on Lagrangian Decomposition.
We show that we obtain bounds comparable with off-the-shelf solvers in a fraction of their running time.
arXiv Detail & Related papers (2020-02-24T17:55:10Z)