Convergent regularization in inverse problems and linear plug-and-play
denoisers
- URL: http://arxiv.org/abs/2307.09441v1
- Date: Tue, 18 Jul 2023 17:16:08 GMT
- Title: Convergent regularization in inverse problems and linear plug-and-play
denoisers
- Authors: Andreas Hauptmann and Subhadip Mukherjee and Carola-Bibiane
Schönlieb and Ferdia Sherry
- Abstract summary: Plug-and-play (PnP) denoising is a popular framework for solving imaging inverse problems using off-the-shelf image denoisers.
Not much is known about the properties of the converged solution as the noise level in the measurement vanishes, i.e., whether PnP methods are provably convergent regularization schemes.
We show that, with linear denoisers, relating the implicit regularization of the denoiser to an explicit regularization functional leads to a convergent regularization scheme.
- Score: 3.759634359597638
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Plug-and-play (PnP) denoising is a popular iterative framework for solving
imaging inverse problems using off-the-shelf image denoisers. Their empirical
success has motivated a line of research that seeks to understand the
convergence of PnP iterates under various assumptions on the denoiser. While a
significant amount of research has gone into establishing the convergence of
the PnP iteration for different regularity conditions on the denoisers, not
much is known about the asymptotic properties of the converged solution as the
noise level in the measurement tends to zero, i.e., whether PnP methods are
provably convergent regularization schemes under reasonable assumptions on the
denoiser. This paper serves two purposes: first, we provide an overview of the
classical regularization theory in inverse problems and survey a few notable
recent data-driven methods that are provably convergent regularization schemes.
We then continue to discuss PnP algorithms and their established convergence
guarantees. Subsequently, we consider PnP algorithms with linear denoisers and
propose a novel spectral filtering technique to control the strength of
regularization arising from the denoiser. Further, by relating the implicit
regularization of the denoiser to an explicit regularization functional, we
rigorously show that PnP with linear denoisers leads to a convergent
regularization scheme. More specifically, we prove that in the limit as the
noise vanishes, the PnP reconstruction converges to the minimizer of a
regularization potential subject to the solution satisfying the noiseless
operator equation. The theoretical analysis is corroborated by numerical
experiments for the classical inverse problem of tomographic image
reconstruction.
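To make the central construction concrete, the following is a minimal numpy sketch of a PnP iteration with a linear denoiser whose strength is controlled by spectral filtering. The circulant blur, the Gaussian smoothing matrix, the eigenvalue-power filter, and all constants are illustrative assumptions rather than the paper's exact choices.

```python
# Minimal sketch: PnP iteration with a linear denoiser and spectral
# filtering of its eigenvalues; all constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 64

# Forward operator: circulant 5-tap moving average (a mild blur).
A = np.zeros((n, n))
for i in range(n):
    for j in range(-2, 3):
        A[i, (i + j) % n] = 0.2

x_true = np.zeros(n)
x_true[20:30] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(n)

# Linear denoiser: symmetric circulant Gaussian smoother.
idx = np.arange(n)
d = np.abs(idx[:, None] - idx[None, :])
d = np.minimum(d, n - d)                   # circular distance
W = np.exp(-d.astype(float) ** 2 / 4.0)
W /= W.sum(axis=1, keepdims=True)          # equal row sums, so W stays symmetric

# Spectral filtering: tune the regularization strength via the eigenvalues.
lam, V = np.linalg.eigh(W)
lam = np.clip(lam, 0.0, 1.0)

def filtered_denoiser(alpha):
    """alpha -> 0 weakens the smoothing; alpha = 1 recovers W itself."""
    return V @ np.diag(lam ** alpha) @ V.T

W_alpha = filtered_denoiser(0.5)

# PnP iteration: gradient step on the data fit, then apply the denoiser.
eta, x = 0.5, np.zeros(n)
for _ in range(500):
    x = W_alpha @ (x - eta * A.T @ (A @ x - y))

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```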
Related papers
- Gradient Normalization with(out) Clipping Ensures Convergence of Nonconvex SGD under Heavy-Tailed Noise with Improved Results [60.92029979853314]
This paper investigates normalized SGD with clipping (NSGDC) and its variance-reduced variant (NSGDC-VR).
We present significant improvements in the theoretical results for both algorithms.
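As a rough illustration of the mechanisms named in the title, here is a hedged sketch of normalized SGD (with a clipping helper for the clipped variant) on a toy objective with heavy-tailed gradient noise; the objective, step-size schedule, and clipping level are assumptions, not the paper's setup.

```python
# Hedged sketch: normalized SGD under heavy-tailed noise, with a clipping
# helper for the clipped variant; toy objective and constants are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def noisy_grad(x):
    # gradient of f(x) = 0.5*||x||^2 plus heavy-tailed Student-t noise
    return x + rng.standard_t(df=2, size=x.shape)

def normalize(g):
    return g / max(np.linalg.norm(g), 1e-12)    # unit-norm direction

def clip(g, c):
    ng = np.linalg.norm(g)
    return g if ng <= c else (c / ng) * g       # norm capped at c

x = np.full(5, 10.0)
for t in range(1, 3001):
    g = normalize(noisy_grad(x))                # or clip(noisy_grad(x), c=1.0)
    x = x - (0.5 / np.sqrt(t)) * g

print("final distance to the minimizer:", np.linalg.norm(x))
```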
arXiv Detail & Related papers (2024-10-21T22:40:42Z)
- Plug-and-Play image restoration with Stochastic deNOising REgularization [8.678250057211368]
We propose a new framework called Stochastic deNOising REgularization (SNORE).
SNORE applies the denoiser only to images with noise of the adequate level.
It is based on an explicit stochastic regularization, which leads to a stochastic gradient descent algorithm for solving inverse problems.
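A hedged reading of that idea in code: inject noise at the level the denoiser expects, denoise, and use the residual as a stochastic regularization gradient. The Gaussian smoother stands in for a learned denoiser, and the scaling of the regularization term is an assumption.

```python
# SNORE-style sketch: denoise a *noised* copy of the iterate and use the
# residual as a stochastic regularization gradient.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
n = 128
x_true = np.sin(np.linspace(0, 4 * np.pi, n))
y = x_true + 0.1 * rng.standard_normal(n)            # noisy measurement

sigma, eta, lam = 0.1, 0.05, 1.0
x = y.copy()
for _ in range(1000):
    x_noisy = x + sigma * rng.standard_normal(n)     # noise at the level the denoiser expects
    denoised = gaussian_filter1d(x_noisy, 2.0)       # surrogate denoiser
    reg_grad = x_noisy - denoised                    # stochastic regularization direction (up to scaling)
    data_grad = x - y                                # gradient of 0.5*||x - y||^2
    x = x - eta * (data_grad + lam * reg_grad)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```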
arXiv Detail & Related papers (2024-02-01T18:05:47Z)
- Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
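A hedged toy demonstration of that stabilization effect: on the bilinear game min_x max_y x*y, plain simultaneous gradient descent-ascent spirals outward, while lookahead-style linear interpolation of the iterates contracts. The game, step sizes, and interpolation schedule are illustrative assumptions.

```python
# Hedged toy: lookahead-style linear interpolation stabilizes GDA on the
# bilinear game min_x max_y x*y; all constants are illustrative.
import numpy as np

def gda_step(x, y, eta=0.1):
    return x - eta * y, y + eta * x   # grad_x(x*y) = y, grad_y(x*y) = x

# Plain simultaneous GDA spirals outward.
x, y = 1.0, 1.0
for _ in range(200):
    x, y = gda_step(x, y)
print("plain GDA:     |(x, y)| =", np.hypot(x, y))

# Lookahead: k fast GDA steps, then interpolate the slow iterate toward
# the fast one with weight alpha (a nonexpansive averaging step).
xs, ys, alpha, k = 1.0, 1.0, 0.5, 10
for _ in range(20):
    xf, yf = xs, ys
    for _ in range(k):
        xf, yf = gda_step(xf, yf)
    xs, ys = xs + alpha * (xf - xs), ys + alpha * (yf - ys)
print("interpolated:  |(x, y)| =", np.hypot(xs, ys))
```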
arXiv Detail & Related papers (2023-10-20T12:45:12Z)
- First Order Methods with Markovian Noise: from Acceleration to Variational Inequalities [91.46841922915418]
We present a unified approach for the theoretical analysis of first-order methods for stochastic optimization and variational inequalities under Markovian noise.
Our approach covers both non-convex and strongly convex minimization problems.
We provide bounds that match the oracle complexity in the case of strongly convex optimization problems.
arXiv Detail & Related papers (2023-05-25T11:11:31Z)
- Block Coordinate Plug-and-Play Methods for Blind Inverse Problems [13.543612162739773]
Plug-and-play priors constitute a well-known framework for solving inverse problems by combining physical measurement models with learned image denoisers.
While PnP methods have been extensively used for image recovery with known measurement operators, there is little work on PnP for blind inverse problems.
We address this gap by using learned denoisers as priors on both the unknown image and the unknown measurement operator.
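A hedged sketch of the block-coordinate idea on 1D blind deconvolution: alternate gradient steps on the image and on the kernel, each followed by a plug-in prior step. Gaussian smoothing and a nonnegativity/normalization step stand in for the learned denoisers.

```python
# Hedged sketch of block-coordinate PnP for a blind inverse problem
# (1D blind deconvolution); priors here are simple stand-ins.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
n, m = 128, 9

def embed(k):
    kp = np.zeros(n)
    kp[:m] = k
    return np.roll(kp, -(m // 2))                 # center the kernel

def conv(x, k):
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(embed(k))))

x_true = np.zeros(n); x_true[40:60] = 1.0
k_true = np.exp(-np.linspace(-2, 2, m) ** 2); k_true /= k_true.sum()
y = conv(x_true, k_true) + 0.01 * rng.standard_normal(n)

x, k = y.copy(), np.ones(m) / m
eta_x, eta_k = 0.5, 2e-3
for _ in range(300):
    r = conv(x, k) - y
    # image block: gradient step on the data fit, then "denoise"
    gx = np.real(np.fft.ifft(np.conj(np.fft.fft(embed(k))) * np.fft.fft(r)))
    x = gaussian_filter1d(x - eta_x * gx, 1.0)
    # kernel block: gradient step, then a simple prior (nonneg, unit sum)
    gk_full = np.real(np.fft.ifft(np.conj(np.fft.fft(x)) * np.fft.fft(r)))
    gk = np.roll(gk_full, m // 2)[:m]
    k = np.clip(k - eta_k * gk, 0.0, None)
    k /= max(k.sum(), 1e-12)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```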
arXiv Detail & Related papers (2023-05-22T03:27:30Z)
- A relaxed proximal gradient descent algorithm for convergent plug-and-play with proximal denoiser [6.2484576862659065]
This paper presents a new convergent plug-and-play (PnP) proximal gradient descent algorithm.
The algorithm converges for a wider range of regularization parameters, thus allowing more accurate restoration of an image.
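A hedged sketch of the generic mechanism: a plain PnP proximal gradient step relaxed by averaging with the previous iterate. The soft-thresholding "denoiser" and the relaxation rule are assumptions, not the paper's exact scheme.

```python
# Hedged sketch of a relaxed PnP proximal gradient iteration.
import numpy as np

rng = np.random.default_rng(0)
n = 64
A = rng.standard_normal((n, n)) / np.sqrt(n)
x_true = np.zeros(n); x_true[::8] = 1.0         # sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(n)

def denoise(x, t=0.05):
    # soft-thresholding, standing in for a learned proximal denoiser
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

eta = 0.5 / np.linalg.norm(A, 2) ** 2           # data-fidelity step size
gamma = 0.7                                     # relaxation parameter in (0, 1]
x = np.zeros(n)
for _ in range(500):
    z = denoise(x - eta * A.T @ (A @ x - y))    # plain PnP-PGD step
    x = (1 - gamma) * x + gamma * z             # relaxation toward the PnP step

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```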
arXiv Detail & Related papers (2023-01-31T16:11:47Z)
- Clipped Stochastic Methods for Variational Inequalities with Heavy-Tailed Noise [64.85879194013407]
We prove the first high-probability results with logarithmic dependence on the confidence level for clipped stochastic methods for solving monotone and structured non-monotone VIPs.
Our results match the best-known ones in the light-tails case and are novel for structured non-monotone problems.
In addition, we numerically validate that the gradient noise of many practical formulations is heavy-tailed and show that clipping improves the performance of SEG/SGDA.
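A hedged sketch of gradient clipping inside stochastic gradient descent-ascent on a strongly monotone saddle problem with heavy-tailed noise; the problem and all constants are illustrative choices, not the paper's setting.

```python
# Hedged sketch: clipped stochastic gradient descent-ascent on a strongly
# monotone saddle problem with heavy-tailed (Student-t) noise.
import numpy as np

rng = np.random.default_rng(0)

def clip(g, c):
    ng = np.linalg.norm(g)
    return g if ng <= c else (c / ng) * g

# f(x, y) = 0.5*x^2 + x*y - 0.5*y^2; unique saddle point at (0, 0).
z = np.array([5.0, -5.0])
for t in range(1, 5001):
    x, y = z
    field = np.array([x + y, -(x - y)])      # (grad_x f, -grad_y f)
    noise = rng.standard_t(df=2, size=2)     # heavy tails: infinite variance
    z = z - (1.0 / np.sqrt(t)) * clip(field + noise, c=2.0)

print("distance to the saddle point:", np.linalg.norm(z))
```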
arXiv Detail & Related papers (2022-06-02T15:21:55Z)
- Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization [7.0226402509856225]
Plug-and-Play (PnP) methods solve ill-posed inverse problems through proximal algorithms by replacing the proximal operator with a denoising operator learned by a neural network.
We show that this denoiser actually corresponds to a gradient step on an explicit functional.
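To illustrate the gradient-step structure, here is a hedged sketch in which the denoiser is exactly D = Id - grad g for a hand-crafted smoothness potential g (standing in for a learned potential), so each PnP step composes an explicit gradient step on the data term with a gradient step on g.

```python
# Hedged sketch of a gradient-step denoiser D = Id - grad(g), with g a
# hand-crafted quadratic smoothness potential standing in for a learned one.
import numpy as np

n = 64

def grad_g(x, lam=0.2):
    # g(x) = 0.5 * lam * ||Dx||^2 with D the circular finite-difference
    # operator, so grad g(x) = lam * D^T D x (a graph Laplacian).
    return lam * (2 * x - np.roll(x, 1) - np.roll(x, -1))

def denoise(x):
    return x - grad_g(x)   # the gradient-step denoiser

# PnP gradient descent on f(x) = 0.5*||x - y||^2: each iteration composes
# an explicit gradient step on f with a gradient step on g, so classical
# smooth-optimization arguments apply.
rng = np.random.default_rng(0)
y = np.sin(np.linspace(0, 4 * np.pi, n)) + 0.2 * rng.standard_normal(n)
eta, x = 0.5, y.copy()
for _ in range(200):
    x = denoise(x - eta * (x - y))
```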
arXiv Detail & Related papers (2022-01-31T14:05:20Z)
- On the Convergence of Stochastic Extragradient for Bilinear Games with Restarted Iteration Averaging [96.13485146617322]
We present an analysis of the Stochastic ExtraGradient (SEG) method with constant step size and propose variations of the method that yield favorable convergence.
We prove that when augmented with iteration averaging, SEG provably converges to the Nash equilibrium, and that this convergence is provably accelerated by incorporating a scheduled restarting procedure.
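A hedged sketch of SEG with uniform iteration averaging and scheduled restarts on the bilinear game min_x max_y x*y; the noise model and the doubling restart schedule are illustrative assumptions.

```python
# Hedged sketch: Stochastic ExtraGradient with iteration averaging and
# scheduled restarts on the bilinear game min_x max_y x*y.
import numpy as np

rng = np.random.default_rng(0)

def field(z):
    x, y = z
    return np.array([y, -x])   # descent-ascent field of f(x, y) = x*y

def seg_epoch(z0, steps, eta=0.1, noise=0.5):
    z, z_sum = z0.copy(), np.zeros(2)
    for _ in range(steps):
        g1 = field(z) + noise * rng.standard_normal(2)
        z_half = z - eta * g1                    # extrapolation step
        g2 = field(z_half) + noise * rng.standard_normal(2)
        z = z - eta * g2                         # update step
        z_sum += z
    return z_sum / steps                         # uniform iterate average

z = np.array([3.0, 3.0])
for epoch in range(6):
    z = seg_epoch(z, steps=200 * 2 ** epoch)     # restart from the average
    print(f"epoch {epoch}: distance to equilibrium = {np.linalg.norm(z):.4f}")
```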
arXiv Detail & Related papers (2021-06-30T17:51:36Z)
- Plug-and-play ISTA converges with kernel denoisers [21.361571421723262]
Plug-and-play (PnP) is a recent paradigm for image regularization.
A fundamental question in this regard is the theoretical convergence of the PnP iterations with kernel denoisers.
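A hedged sketch of PnP-ISTA with a kernel denoiser: a row-stochastic matrix built from similarities of a guide signal serves as a crude stand-in for the kernel denoisers studied in the paper.

```python
# Hedged sketch of PnP-ISTA with a kernel denoiser: a row-stochastic
# similarity matrix (crude NLM-like stand-in) plugged into ISTA.
import numpy as np

rng = np.random.default_rng(0)
n = 64
A = np.tril(np.ones((n, n))) / n               # ill-conditioned forward operator
x_true = np.zeros(n); x_true[16:32] = 1.0
y = A @ x_true + 0.005 * rng.standard_normal(n)

# Kernel denoiser built from similarities of a guide signal.
guide = A.T @ y                                # crude backprojection as guide
K = np.exp(-((guide[:, None] - guide[None, :]) ** 2) / 0.01)
W = K / K.sum(axis=1, keepdims=True)           # row-stochastic kernel matrix

eta = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(1000):
    x = W @ (x - eta * A.T @ (A @ x - y))      # gradient step, then denoise
```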
arXiv Detail & Related papers (2020-04-07T06:25:34Z)
- Solving Inverse Problems with a Flow-based Noise Model [100.18560761392692]
We study image inverse problems with a normalizing flow prior.
Our formulation views the solution as the maximum a posteriori estimate of the image conditioned on the measurements.
We empirically validate the efficacy of our method on various inverse problems, including compressed sensing with quantized measurements and denoising with highly structured noise patterns.
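A hedged sketch of the MAP formulation with a toy "flow" prior: a single affine transform of a standard Gaussian, so the log-density and its gradient are available in closed form (standing in for a deep normalizing flow).

```python
# Hedged sketch: MAP estimation with a toy affine "flow" prior
# x = mu + s*z, z ~ N(0, I), so log p(x) and its gradient are closed-form.
import numpy as np

rng = np.random.default_rng(0)
n, m = 32, 16
mu, s = 2.0, 0.5
A = rng.standard_normal((m, n)) / np.sqrt(m)    # underdetermined measurements
x_true = mu + s * rng.standard_normal(n)
sigma = 0.05
y = A @ x_true + sigma * rng.standard_normal(m)

def neg_log_post_grad(x):
    data = A.T @ (A @ x - y) / sigma**2         # gradient of the fidelity term
    prior = (x - mu) / s**2                     # -grad log p_flow(x)
    return data + prior

x = np.full(n, mu)                              # start at the prior mode
for _ in range(2000):
    x -= 5e-4 * neg_log_post_grad(x)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```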
arXiv Detail & Related papers (2020-03-18T08:33:49Z)