A relaxed proximal gradient descent algorithm for convergent
plug-and-play with proximal denoiser
- URL: http://arxiv.org/abs/2301.13731v2
- Date: Wed, 5 Apr 2023 12:20:25 GMT
- Title: A relaxed proximal gradient descent algorithm for convergent
plug-and-play with proximal denoiser
- Authors: Samuel Hurault, Antonin Chambolle, Arthur Leclaire and Nicolas
Papadakis
- Abstract summary: This paper presents a new convergent Plug-and-Play (PnP) algorithm.
The algorithm converges for a wider range of regularization parameters, thus allowing more accurate restoration of an image.
- Score: 6.2484576862659065
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a new convergent Plug-and-Play (PnP) algorithm. PnP
methods are efficient iterative algorithms for solving image inverse problems
formulated as the minimization of the sum of a data-fidelity term and a
regularization term. PnP methods perform regularization by plugging a
pre-trained denoiser in a proximal algorithm, such as Proximal Gradient Descent
(PGD). To ensure convergence of PnP schemes, many works study specific
parametrizations of deep denoisers. However, existing results require either
unverifiable or suboptimal hypotheses on the denoiser, or assume restrictive
conditions on the parameters of the inverse problem. Observing that these
limitations can be due to the proximal algorithm in use, we study a relaxed
version of the PGD algorithm for minimizing the sum of a convex function and a
weakly convex one. When plugged with a relaxed proximal denoiser, we show that
the proposed PnP-$\alpha$PGD algorithm converges for a wider range of
regularization parameters, thus allowing more accurate image restoration.
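
As a concrete illustration of the scheme the abstract describes, the sketch below runs a relaxed proximal gradient iteration in which a denoiser stands in for the proximal map of the regularizer, with the relaxation weight `alpha` playing the role of $\alpha$ in PnP-$\alpha$PGD. The quadratic data-fidelity term, the soft-thresholding "denoiser", and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pnp_alpha_pgd(y, A, denoiser, alpha=0.7, step=0.2, n_iter=200):
    """Relaxed PnP proximal gradient iteration (illustrative sketch).

    Targets min_x (1/2)||A x - y||^2 + phi(x), where phi is only implicitly
    defined by the plugged-in denoiser. alpha = 1 recovers plain PnP-PGD.
    """
    x = A.T @ y  # crude initialization from the observation
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)         # gradient of the data-fidelity term
        z = denoiser(x - step * grad)    # denoiser replaces the proximal map
        x = (1 - alpha) * x + alpha * z  # relaxed (averaged) update
    return x

# Toy usage on a random inverse problem, with soft-thresholding standing in
# for a learned proximal denoiser.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50)) / np.sqrt(30)
x_true = np.zeros(50)
x_true[:5] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(30)
soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - 0.02, 0.0)
print(np.linalg.norm(pnp_alpha_pgd(y, A, soft) - x_true))
```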
Related papers
- Convergent plug-and-play with proximal denoiser and unconstrained
regularization parameter [12.006511319607473]
In this work, we present new proofs of convergence for Plug-and-Play (PnP) algorithms.
Recent research has explored convergence through Douglas-Rachford Splitting (DRS) and Proximal Gradient Descent (PGD).
First, we provide a novel convergence proof for PnP-DRS that does not impose any restriction on the regularization parameter.
Second, we examine a relaxed version of the PGD algorithm that enhances the accuracy of image restoration.
arXiv Detail & Related papers (2023-11-02T13:18:39Z)
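
The entry above concerns PnP built on Douglas-Rachford Splitting (DRS). A minimal sketch of one such iteration, assuming a quadratic data-fidelity term whose proximal map has a closed form and a generic denoiser standing in for the regularizer's prox; all names and step sizes are illustrative, not the paper's code:

```python
import numpy as np

def pnp_drs(y, A, denoiser, step=0.5, n_iter=100):
    """Sketch of a plug-and-play Douglas-Rachford splitting iteration."""
    n = A.shape[1]
    # Proximal map of (step/2)||A x - y||^2, computed via a linear solve.
    M = np.eye(n) + step * (A.T @ A)
    rhs_fixed = step * (A.T @ y)
    z = np.zeros(n)
    for _ in range(n_iter):
        x = np.linalg.solve(M, z + rhs_fixed)  # data-fidelity prox
        u = denoiser(2 * x - z)                # denoiser as the second prox
        z = z + u - x                          # DRS update of the driver z
    return x
```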
- Convergent Bregman Plug-and-Play Image Restoration for Poisson Inverse Problems [8.673558396669806]
Plug-and-Play (PnP) methods are efficient iterative algorithms for solving ill-posed image inverse problems.
We propose two PnP algorithms based on the Bregman Score Denoiser for solving Poisson inverse problems.
arXiv Detail & Related papers (2023-06-06T07:36:47Z)
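
For a rough sense of what a Bregman variant changes, the sketch below swaps the Euclidean gradient step for a mirror step under the Burg entropy $h(x) = -\sum_i \log x_i$, which keeps iterates positive as a Poisson likelihood requires; the `denoiser` call stands in for the paper's Bregman Score Denoiser. The potential, step size, and clipping are assumptions for illustration only.

```python
import numpy as np

def bregman_pnp_step(x, y, A, denoiser, step=0.1):
    """One sketched Bregman PnP step for the Poisson negative log-likelihood
    f(x) = sum(A x - y * log(A x)), using the Burg entropy as potential."""
    grad = A.T @ (1.0 - y / (A @ x))   # gradient of the Poisson fidelity
    # Mirror step: -1/x_new = -1/x - step*grad, which solves in closed form.
    x_half = x / (1.0 + step * x * grad)
    return denoiser(np.clip(x_half, 1e-8, None))
```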
- Provably Convergent Plug-and-Play Quasi-Newton Methods [5.9974035827998655]
We propose an efficient method to combine data-fidelity terms and deep denoisers.
We show that the proposed quasi-Newton algorithm converges to critical points of a weakly convex function.
Experiments on image deblurring and super-resolution demonstrate faster convergence compared to other provable PnP methods.
arXiv Detail & Related papers (2023-03-09T20:09:15Z)
- A Semismooth Newton Stochastic Proximal Point Algorithm with Variance Reduction [2.048226951354646]
We develop an implementable stochastic proximal point (SPP) method for a class of weakly convex, composite optimization problems.
The proposed algorithm incorporates a variance reduction mechanism and the resulting updates are solved using an inexact semismooth Newton framework.
arXiv Detail & Related papers (2022-04-01T13:08:49Z)
- Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization [7.0226402509856225]
Plug-and-Play (PnP) methods solve ill-posed inverse problems through iterative proximal algorithms by replacing a proximal operator with a denoising operation realized by a neural network.
We show that this gradient-step denoiser actually corresponds to the proximal operator of another function.
arXiv Detail & Related papers (2022-01-31T14:05:20Z)
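
The parametrization behind this entry can be written in a couple of lines: the denoiser is constrained to be a gradient step $D = \mathrm{Id} - \nabla g$ on a potential $g$, which is what makes the PnP iteration analyzable. The toy quadratic potential below is a placeholder assumption for a learned network.

```python
import numpy as np

def gradient_step_denoiser(x, grad_g):
    """Denoiser of the form D(x) = x - grad g(x) (gradient-step sketch)."""
    return x - grad_g(x)

# Toy potential g(x) = (lam/2)||x||^2, whose gradient step shrinks the input.
lam = 0.3
toy_grad = lambda v: lam * v
print(gradient_step_denoiser(np.array([1.0, -2.0, 0.5]), toy_grad))
```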
- Uniform-PAC Bounds for Reinforcement Learning with Linear Function Approximation [92.3161051419884]
We study reinforcement learning with linear function approximation.
Existing algorithms only have high-probability regret and/or Probably Approximately Correct (PAC) sample complexity guarantees.
We propose a new algorithm called FLUTE, which enjoys uniform-PAC convergence to the optimal policy with high probability.
arXiv Detail & Related papers (2021-06-22T08:48:56Z)
- Average-Reward Off-Policy Policy Evaluation with Function Approximation [66.67075551933438]
We consider off-policy policy evaluation with function approximation in average-reward MDPs.
Bootstrapping is necessary and, together with off-policy learning and function approximation (FA), results in the deadly triad.
We propose two novel algorithms, reproducing the celebrated success of Gradient TD algorithms in the average-reward setting.
arXiv Detail & Related papers (2021-01-08T00:43:04Z)
- An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient methods.
arXiv Detail & Related papers (2020-07-01T18:43:32Z)
- Exploiting Higher Order Smoothness in Derivative-free Optimization and Continuous Bandits [99.70167985955352]
We study the problem of zero-order optimization of a strongly convex function.
We consider a randomized approximation of the projected gradient descent algorithm.
Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters.
arXiv Detail & Related papers (2020-06-14T10:42:23Z)
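
As a final concrete sketch, the zero-order entry above can be grounded by the basic randomized scheme it refines: estimate the gradient from two function evaluations along a random direction and project the iterate back onto the constraint set. The estimator below is the standard two-point version, not the paper's higher-order-smoothness construction; all names and constants are illustrative.

```python
import numpy as np

def zero_order_pgd(f, x0, radius=1.0, step=0.05, h=1e-3, n_iter=500, seed=0):
    """Sketch of randomized zeroth-order projected gradient descent."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = x.size
    for _ in range(n_iter):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)                  # random unit direction
        # Two-point finite-difference estimate of the gradient along u.
        g = d * (f(x + h * u) - f(x - h * u)) / (2 * h) * u
        x = x - step * g
        norm = np.linalg.norm(x)
        if norm > radius:                       # project onto the ball
            x *= radius / norm
    return x

# Toy usage: minimize a strongly convex quadratic over the unit ball.
f = lambda v: np.sum((v - 0.3) ** 2)
print(zero_order_pgd(f, np.ones(5)))
```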
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.