Fixed-Point and Objective Convergence of Plug-and-Play Algorithms
- URL: http://arxiv.org/abs/2104.10348v1
- Date: Wed, 21 Apr 2021 04:25:17 GMT
- Title: Fixed-Point and Objective Convergence of Plug-and-Play Algorithms
- Authors: Pravin Nair and Ruturaj G. Gavaskar and Kunal N. Chaudhury
- Abstract summary: A standard model for image reconstruction involves the minimization of a data-fidelity term along with a regularizer.
In this paper, we establish both forms of convergence for a special class of linear denoisers.
We work with a special inner product (and norm) derived from the linear denoiser.
- Score: 25.65350839936094
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A standard model for image reconstruction involves the minimization of a
data-fidelity term along with a regularizer, where the optimization is
performed using proximal algorithms such as ISTA and ADMM. In plug-and-play
(PnP) regularization, the proximal operator (associated with the regularizer)
in ISTA and ADMM is replaced by a powerful image denoiser. Although PnP
regularization works surprisingly well in practice, its theoretical convergence
-- whether convergence of the PnP iterates is guaranteed and if they minimize
some objective function -- is not completely understood even for simple linear
denoisers such as nonlocal means. In particular, while there are works where
either iterate or objective convergence is established separately, a
simultaneous guarantee on iterate and objective convergence is not available
for any denoiser to our knowledge. In this paper, we establish both forms of
convergence for a special class of linear denoisers. Notably, unlike existing
works where the focus is on symmetric denoisers, our analysis covers
non-symmetric denoisers such as nonlocal means and almost any convex
data-fidelity. The novelty in this regard is that we make use of the
convergence theory of averaged operators and we work with a special inner
product (and norm) derived from the linear denoiser; the latter requires us to
appropriately define the gradient and proximal operators associated with the
data-fidelity term. We validate our convergence results using image
reconstruction experiments.
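For orientation, the sketch below illustrates the PnP-ISTA mechanism described in the abstract: a gradient step on the data-fidelity term followed by a linear denoiser W in place of the proximal operator. It is a minimal sketch; the forward operator, the row-normalized kernel denoiser, and the step size are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def pnp_ista(A, b, W, step, n_iters=200, x0=None):
    """PnP-ISTA sketch: gradient step on f(x) = 0.5 * ||Ax - b||^2,
    with a linear denoiser W applied in place of the proximal operator."""
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)        # gradient of the data-fidelity term
        x = W @ (x - step * grad)       # denoiser replaces the prox of the regularizer
    return x

# Toy 1-D reconstruction problem (hypothetical setup, not the paper's experiments).
rng = np.random.default_rng(0)
n = 64
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # forward operator
x_true = np.sin(np.linspace(0, 4 * np.pi, n))
b = A @ x_true + 0.01 * rng.standard_normal(n)

# Row-normalized Gaussian kernel denoiser; row normalization generally makes W
# non-symmetric, which is the class of denoisers highlighted in the abstract.
idx = np.arange(n)
K = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)
W = K / K.sum(axis=1, keepdims=True)

step = 1.0 / np.linalg.norm(A, 2) ** 2               # 1/L for the least-squares term
x_rec = pnp_ista(A, b, W, step)
```

The sketch only shows where the denoiser enters the ISTA update; it does not reproduce the paper's inner-product construction or its convergence conditions.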
Related papers
- You KAN Do It in a Single Shot: Plug-and-Play Methods with Single-Instance Priors [10.726369475010818]
We introduce KAN-PnP, an optimisation framework that incorporates Kolmogorov-Arnold Networks (KANs) as denoisers.
KAN-PnP is specifically designed to solve inverse problems with single-instance priors, where only a single noisy observation is available.
arXiv Detail & Related papers (2024-12-09T04:55:18Z) - Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z) - Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that due to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
arXiv Detail & Related papers (2024-10-22T10:19:27Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
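For context, the interpolated update discussed above can be written as $x_{k+1} = (1-\lambda)\,x_k + \lambda\, T(x_k)$ for an update operator $T$. Below is a minimal sketch in which $T$ is a gradient step on a toy quadratic; the operator, step size, and $\lambda$ are assumptions made only for illustration.

```python
import numpy as np

def interpolated_iteration(T, x0, lam=0.5, n_iters=100):
    """Averaged (linearly interpolated) iteration: x <- (1 - lam) * x + lam * T(x)."""
    x = x0.copy()
    for _ in range(n_iters):
        x = (1.0 - lam) * x + lam * T(x)
    return x

# Toy update operator: one gradient step on 0.5 * x^T Q x (purely illustrative).
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
T = lambda x: x - 0.4 * (Q @ x)

x_final = interpolated_iteration(T, np.array([5.0, -3.0]))
```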
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Robust Low-Rank Matrix Completion via a New Sparsity-Inducing
Regularizer [30.920908325825668]
This paper presents a novel loss function, referred to as hybrid ordinary-Welsch (HOW), and a new sparsity-inducing regularizer for the robust low-rank matrix completion problem.
arXiv Detail & Related papers (2023-10-07T09:47:55Z) - Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z) - Tree ensemble kernels for Bayesian optimization with known constraints
over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
arXiv Detail & Related papers (2022-07-02T16:59:37Z) - Proximal denoiser for convergent plug-and-play optimization with
nonconvex regularization [7.0226402509856225]
Plug-and-Play (PnP) methods solve ill-posed inverse problems through iterative proximal algorithms by replacing a proximal operator with a denoising operation.
We show that this gradient denoiser can actually correspond to the proximal operator of another scalar function.
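For context, the "gradient denoiser" referenced here has the form $D(x) = x - \nabla g(x)$ for an explicit potential $g$. The sketch below uses a hand-crafted quadratic $g$ as a hypothetical stand-in for the learned functional.

```python
import numpy as np

def gradient_step_denoiser(x, grad_g):
    """Denoiser written as one gradient step on an explicit potential g."""
    return x - grad_g(x)

def grad_g(x, lam=0.5):
    # Gradient of g(x) = (lam / 2) * x^T (I - S) x, with S a symmetric
    # 3-tap moving-average matrix; a crude, illustrative stand-in for a learned g.
    Sx = np.convolve(x, np.ones(3) / 3.0, mode="same")
    return lam * (x - Sx)

rng = np.random.default_rng(1)
x_noisy = np.sin(np.linspace(0, 2 * np.pi, 64)) + 0.1 * rng.standard_normal(64)
x_denoised = gradient_step_denoiser(x_noisy, grad_g)
```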
arXiv Detail & Related papers (2022-01-31T14:05:20Z) - Efficient Methods for Structured Nonconvex-Nonconcave Min-Max
Optimization [98.0595480384208]
We propose a generalization of the extragradient method which converges to a stationary point.
The algorithm applies not only to Euclidean spaces, but also to general $\ell_p$-normed finite-dimensional vector spaces.
arXiv Detail & Related papers (2020-10-31T21:35:42Z) - Plug-and-play ISTA converges with kernel denoisers [21.361571421723262]
Plug-and-play (PnP) method is a recent paradigm for image regularization.
A fundamental question in this regard is the theoretical convergence of the PnP iterations.
arXiv Detail & Related papers (2020-04-07T06:25:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.