Plug-and-play ISTA converges with kernel denoisers
- URL: http://arxiv.org/abs/2004.03145v2
- Date: Tue, 14 Apr 2020 14:24:53 GMT
- Title: Plug-and-play ISTA converges with kernel denoisers
- Authors: Ruturaj G. Gavaskar and Kunal N. Chaudhury
- Abstract summary: Plug-and-play (PnP) method is a recent paradigm for image regularization.
A fundamental question in this regard is the theoretical convergence of the PnP iterations with kernel denoisers.
- Score: 21.361571421723262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Plug-and-play (PnP) method is a recent paradigm for image regularization,
where the proximal operator (associated with some given regularizer) in an
iterative algorithm is replaced with a powerful denoiser. Algorithmically, this
involves repeated inversion (of the forward model) and denoising until
convergence. Remarkably, PnP regularization produces promising results for
several restoration applications. However, a fundamental question in this
regard is the theoretical convergence of the PnP iterations, since the
algorithm is not strictly derived from an optimization framework. This question
has been investigated in recent works, but there are still many unresolved
problems. For example, it is not known if convergence can be guaranteed if we
use generic kernel denoisers (e.g. nonlocal means) within the ISTA framework
(PnP-ISTA). We prove that, under reasonable assumptions, fixed-point
convergence of PnP-ISTA is indeed guaranteed for linear inverse problems such
as deblurring, inpainting and superresolution (the assumptions are verifiable
for inpainting). We compare our theoretical findings with existing results,
validate them numerically, and explain their practical relevance.
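As a rough illustration of the iteration studied here, below is a minimal PnP-ISTA sketch for a linear inverse problem y = Ax + noise, where the proximal step is replaced by a generic denoiser. The forward operator, step size, and the simple box-kernel denoiser standing in for nonlocal means are illustrative placeholders, not the authors' exact setup.

```python
import numpy as np

def pnp_ista(y, A, denoise, gamma=1.0, iters=200):
    """PnP-ISTA sketch: gradient step on the data term, then a denoising step.

    x_{k+1} = D( x_k - gamma * A^T (A x_k - y) )
    """
    x = A.T @ y                           # simple initialization
    for _ in range(iters):
        grad = A.T @ (A @ x - y)          # gradient of 0.5 * ||A x - y||^2
        x = denoise(x - gamma * grad)     # denoiser replaces the proximal operator
    return x

# Toy example: inpainting (A is a row-subsampled identity), with a crude
# linear kernel denoiser standing in for nonlocal means.
rng = np.random.default_rng(0)
n = 64
mask = rng.random(n) < 0.6
A = np.eye(n)[mask]                       # observe roughly 60% of the pixels
x_true = np.convolve(rng.standard_normal(n), np.ones(5) / 5, mode="same")
y = A @ x_true + 0.01 * rng.standard_normal(A.shape[0])

def box_kernel_denoiser(x, width=5):
    # Placeholder kernel denoiser (moving average); NLM would be data-dependent.
    return np.convolve(x, np.ones(width) / width, mode="same")

x_hat = pnp_ista(y, A, box_kernel_denoiser, gamma=1.0, iters=200)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

With A a subsampled identity (inpainting), the gradient of the data term has Lipschitz constant 1, so a unit step size is admissible.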
Related papers
- Convergent plug-and-play with proximal denoiser and unconstrained
regularization parameter [12.006511319607473]
In this work, we present new proofs of convergence for Plug-and-Play proximal gradient descent (PnP-PGD) algorithms.
Recent research has also explored convergence proofs for PnP with Douglas-Rachford splitting (DRS).
First, we provide a novel convergence proof for PnP-DRS that does not impose any restrictions on the regularization parameter.
Second, we examine a relaxed version of PnP-PGD that enhances the accuracy of image restoration.
arXiv Detail & Related papers (2023-11-02T13:18:39Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
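A minimal sketch of the linear-interpolation (lookahead-style) stabilization described above; the inner optimizer step and interpolation weight alpha are hypothetical placeholders, not the paper's exact algorithm.

```python
import numpy as np

def lookahead_train(theta0, inner_step, alpha=0.5, k=5, outer_iters=50):
    """Linear-interpolation stabilization sketch.

    Run k fast inner steps, then pull the slow iterate a fraction alpha
    toward the fast one: theta <- theta + alpha * (theta_fast - theta).
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(outer_iters):
        fast = theta.copy()
        for _ in range(k):
            fast = inner_step(fast)              # e.g. a (stochastic) gradient step
        theta = theta + alpha * (fast - theta)   # linear interpolation / averaging
    return theta

# Toy usage: gradient descent on f(w) = ||w||^2 as the inner optimizer.
inner = lambda w: w - 0.1 * (2.0 * w)
print(lookahead_train(np.ones(3), inner, alpha=0.5, k=5, outer_iters=50))
```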
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization using cubics (ARC) have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously accommodate inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z) - On the Contractivity of Plug-and-Play Operators [11.218821754886514]
In plug-and-play (PnP) regularization, the proximal operator in iterative algorithms such as ISTA and ADMM is replaced by a powerful denoiser.
This formal substitution works surprisingly well in practice.
In fact, PnP has been shown to give state-of-the-art results for various imaging applications.
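In the spirit of that entry, one can numerically probe whether a given linear denoiser is contractive (Lipschitz constant below 1) by estimating its operator norm with power iteration. The moving-average denoiser below is only a stand-in, and the paper's analysis concerns the full PnP operator rather than the denoiser alone.

```python
import numpy as np

def operator_norm(apply_D, n, iters=200, seed=0):
    """Estimate the spectral norm of a linear operator D by power iteration on D^2.

    D is assumed symmetric here, so its spectral norm equals sqrt(lambda_max(D^2));
    a value strictly below 1 means the denoiser is a contraction.
    """
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = apply_D(apply_D(v))          # D (D v), i.e. D^2 v for symmetric D
        v = w / np.linalg.norm(w)
    return np.sqrt(v @ apply_D(apply_D(v)))

def moving_average(x, width=5):
    # Placeholder symmetric linear denoiser.
    return np.convolve(x, np.ones(width) / width, mode="same")

print("estimated ||D||:", operator_norm(moving_average, n=128))
```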
arXiv Detail & Related papers (2023-09-28T23:58:02Z) - Convergent regularization in inverse problems and linear plug-and-play
denoisers [3.759634359597638]
Plug-and-play (PnP) denoising is a popular framework for solving imaging inverse problems using off-the-shelf image denoisers.
Not much is known about the properties of the converged solution as the noise level in the measurement vanishes to zero, i.e. whether PnP methods are provably convergent regularization schemes.
We show that, with linear denoisers, relating the implicit regularization of the denoiser to an explicit regularization functional leads to a convergent regularization scheme.
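A hedged numerical illustration of the linear-denoiser setting, using a standard identity rather than the paper's specific construction: a symmetric positive-definite linear denoiser $W$ with eigenvalues in $(0, 1]$ is exactly the proximal map of the explicit quadratic regularizer $g(x) = \frac{1}{2} x^\top (W^{-1} - I) x$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8

# Build a symmetric positive-definite linear "denoiser" W with eigenvalues in (0, 1].
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = rng.uniform(0.2, 1.0, n)
W = Q @ np.diag(eigs) @ Q.T

# Candidate explicit regularizer: g(x) = 0.5 * x^T M x with M = W^{-1} - I.
M = np.linalg.inv(W) - np.eye(n)

def prox_g(v):
    # prox_g(v) = argmin_x 0.5*||x - v||^2 + 0.5*x^T M x = (I + M)^{-1} v
    return np.linalg.solve(np.eye(n) + M, v)

v = rng.standard_normal(n)
print(np.allclose(W @ v, prox_g(v)))   # True: denoising by W equals a proximal step on g
```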
arXiv Detail & Related papers (2023-07-18T17:16:08Z) - A relaxed proximal gradient descent algorithm for convergent
plug-and-play with proximal denoiser [6.2484576862659065]
This paper presents a new convergent Plug-and-Play (PnP) proximal gradient descent algorithm.
The algorithm converges for a wider range of regularization parameters, thus allowing more accurate restoration of an image.
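Relative to the PnP-ISTA sketch given after the abstract above, a relaxed variant only changes the update rule; the sketch below is a generic relaxation with parameter lam, not necessarily the paper's exact scheme.

```python
import numpy as np

def relaxed_pnp_pgd_step(x, y, A, denoise, gamma, lam=0.7):
    """One relaxed PnP proximal-gradient step (generic sketch).

    Unrelaxed (lam = 1): x+ = D(x - gamma * A^T (A x - y))
    Relaxed:             x+ = (1 - lam) * x + lam * D(x - gamma * A^T (A x - y))
    """
    z = denoise(x - gamma * A.T @ (A @ x - y))
    return (1.0 - lam) * x + lam * z
```

Averaging the new and old iterates with lam in (0, 1) is the Krasnosel'skii-Mann idea: fixed-point iterations of a nonexpansive operator converge once such averaging is introduced.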
arXiv Detail & Related papers (2023-01-31T16:11:47Z) - Regret Bounds for Expected Improvement Algorithms in Gaussian Process
Bandit Optimization [63.8557841188626]
The expected improvement (EI) algorithm is one of the most popular strategies for optimization under uncertainty.
We propose a variant of EI with a standard incumbent defined via the GP predictive mean.
We show that our algorithm converges, and achieves a cumulative regret bound of $\mathcal{O}(\gamma_T \sqrt{T})$.
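A small sketch of the EI acquisition with the incumbent taken as the best GP posterior mean, as in the variant above; the grid, posterior mean, and standard deviation below are placeholder values rather than a fitted GP.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, incumbent):
    """EI for maximization: E[max(f(x) - incumbent, 0)] under a Gaussian posterior."""
    sigma = np.maximum(sigma, 1e-12)
    z = (mu - incumbent) / sigma
    return (mu - incumbent) * norm.cdf(z) + sigma * norm.pdf(z)

# Toy posterior over a 1-D grid (illustrative numbers, not a fitted model).
xs = np.linspace(0.0, 1.0, 101)
mu = np.sin(6.0 * xs)             # posterior mean
sigma = 0.2 * np.ones_like(xs)    # posterior standard deviation

# Variant from the entry above: the incumbent is the best predictive mean,
# not the best observed value.
incumbent = mu.max()
acq = expected_improvement(mu, sigma, incumbent)
print("next query point:", xs[np.argmax(acq)])
```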
arXiv Detail & Related papers (2022-03-15T13:17:53Z) - Proximal denoiser for convergent plug-and-play optimization with
nonconvex regularization [7.0226402509856225]
Plug-and-Play (PnP) methods solve ill-posed inverse problems through iterative proximal algorithms by replacing the proximal operator by a denoising operator, often a neural network.
We show that this denoiser actually corresponds to the gradient step of an explicit functional.
arXiv Detail & Related papers (2022-01-31T14:05:20Z) - Last-iterate Convergence in Extensive-Form Games [49.31256241275577]
We study last-iterate convergence of optimistic algorithms in sequential games.
We show that all of these algorithms enjoy last-iterate convergence, with some of them even converging exponentially fast.
arXiv Detail & Related papers (2021-06-27T22:02:26Z) - Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth
Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point in each step.
Our results are expressed in the form of simultaneous primal and dual side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z) - Lagrangian Decomposition for Neural Network Verification [148.0448557991349]
A fundamental component of neural network verification is the computation of bounds on the values their outputs can take.
We propose a novel approach based on Lagrangian Decomposition.
We show that we obtain bounds comparable with off-the-shelf solvers in a fraction of their running time.
arXiv Detail & Related papers (2020-02-24T17:55:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.