Provably Convergent Plug-and-Play Quasi-Newton Methods
- URL: http://arxiv.org/abs/2303.07271v4
- Date: Mon, 13 Nov 2023 11:58:00 GMT
- Title: Provably Convergent Plug-and-Play Quasi-Newton Methods
- Authors: Hong Ye Tan, Subhadip Mukherjee, Junqi Tang, Carola-Bibiane Schönlieb
- Abstract summary: We propose an efficient method for combining data-fidelity terms and deep denoisers.
We show that the fixed points of the proposed quasi-Newton algorithm are critical points of a weakly convex function.
Experiments on image deblurring and super-resolution demonstrate faster convergence compared to other provable PnP methods.
- Score: 5.9974035827998655
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Plug-and-Play (PnP) methods are a class of efficient iterative methods that
aim to combine data fidelity terms and deep denoisers using classical
optimization algorithms, such as ISTA or ADMM, with applications in inverse
problems and imaging. Provable PnP methods are a subclass of PnP methods with
convergence guarantees, such as fixed point convergence or convergence to
critical points of some energy function. Many existing provable PnP methods
impose heavy restrictions on the denoiser or fidelity function, such as
non-expansiveness or strict convexity, respectively. In this work, we propose a
novel algorithmic approach incorporating quasi-Newton steps into a provable PnP
framework based on proximal denoisers, resulting in greatly accelerated
convergence while retaining light assumptions on the denoiser. By
characterizing the denoiser as the proximal operator of a weakly convex
function, we show that the fixed points of the proposed quasi-Newton PnP
algorithm are critical points of a weakly convex function. Numerical
experiments on image deblurring and super-resolution demonstrate 2--8x faster
convergence as compared to other provable PnP methods with similar
reconstruction quality.
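Below is a minimal sketch of the kind of iteration the abstract describes: a quasi-Newton step (here an L-BFGS-style two-loop recursion, one possible quasi-Newton choice) on the data-fidelity term, followed by a denoiser applied as the proximal operator of an implicit weakly convex regularizer. The `denoiser` callable, the least-squares fidelity, and all parameter values are illustrative assumptions, not the authors' exact algorithm.

    import numpy as np

    def lbfgs_direction(grad, s_hist, y_hist):
        # Two-loop recursion: approximates H_k^{-1} @ grad from
        # recent curvature pairs (s_i, y_i).
        q = grad.copy()
        alphas = []
        for s, y in zip(reversed(s_hist), reversed(y_hist)):
            a = (s @ q) / (y @ s)
            alphas.append(a)
            q -= a * y
        if s_hist:  # scale by gamma = s'y / y'y (initial inverse-Hessian guess)
            s, y = s_hist[-1], y_hist[-1]
            q *= (s @ y) / (y @ y)
        for (s, y), a in zip(zip(s_hist, y_hist), reversed(alphas)):
            b = (y @ q) / (y @ s)
            q += (a - b) * s
        return q

    def pnp_quasi_newton(y_obs, A, denoiser, sigma, step=1.0, iters=100, mem=5):
        # PnP sketch: quasi-Newton step on f(x) = 0.5 * ||A x - y||^2,
        # then the denoiser acting as a proximal step.
        x = A.T @ y_obs
        grad = A.T @ (A @ x - y_obs)
        s_hist, y_hist = [], []
        for _ in range(iters):
            d = lbfgs_direction(grad, s_hist, y_hist)
            x_new = denoiser(x - step * d, sigma)
            grad_new = A.T @ (A @ x_new - y_obs)
            s, yk = x_new - x, grad_new - grad
            if s @ yk > 1e-12:  # keep only positive-curvature pairs
                s_hist.append(s); y_hist.append(yk)
                if len(s_hist) > mem:
                    s_hist.pop(0); y_hist.pop(0)
            x, grad = x_new, grad_new
        return x

    # Toy usage with soft-thresholding standing in for a learned denoiser:
    rng = np.random.default_rng(0)
    A = np.eye(32) + 0.1 * rng.standard_normal((32, 32))
    x_true = rng.standard_normal(32)
    soft = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s, 0.0)
    x_hat = pnp_quasi_newton(A @ x_true, A, soft, sigma=0.01)

Soft-thresholding is the exact proximal operator of the l1 norm, so it is a convenient convex stand-in for the learned proximal denoiser the paper analyzes.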
Related papers
- A Unified Plug-and-Play Algorithm with Projected Landweber Operator for Split Convex Feasibility Problems [6.185478918618347]
In recent years, Plug-and-Play methods have achieved state-of-the-art performance in inverse imaging problems by replacing proximal operators with denoisers.
Applying these methods with theoretically guaranteed step sizes is difficult, and existing algorithms are limited in the noise settings they can handle.
A unified algorithm based on the Projected Landweber Operator (PLO) is proposed to address these issues.
arXiv Detail & Related papers (2024-08-22T03:29:51Z) - A Unified Theory of Stochastic Proximal Point Methods without Smoothness [52.30944052987393]
Proximal point methods have attracted considerable interest owing to their numerical stability and robustness against imperfect tuning.
This paper presents a comprehensive analysis of a broad range of variants of the stochastic proximal point method (SPPM).
arXiv Detail & Related papers (2024-05-24T21:09:19Z) - Incremental Quasi-Newton Methods with Faster Superlinear Convergence
Rates [50.36933471975506]
We consider the finite-sum optimization problem, where each component function is strongly convex and has Lipschitz continuous gradient and Hessian.
The recently proposed incremental quasi-Newton method is based on the BFGS update and achieves a local superlinear convergence rate.
This paper proposes a more efficient quasi-Newton method by incorporating the symmetric rank-1 update into the incremental framework.
arXiv Detail & Related papers (2024-02-04T05:54:51Z) - High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise [96.80184504268593]
Gradient clipping is one of the key algorithmic ingredients for deriving good high-probability guarantees.
However, clipping can spoil the convergence of popular methods for composite and distributed optimization.
arXiv Detail & Related papers (2023-10-03T07:49:17Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image
Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - Block Coordinate Plug-and-Play Methods for Blind Inverse Problems [13.543612162739773]
The plug-and-play prior is a well-known framework for solving inverse problems by combining physical measurement models with learned image denoisers.
While these methods have been extensively used for image recovery with known measurement operators, there is little work on solving blind inverse problems.
We address this gap by using learned denoisers as priors on both the unknown image and the unknown measurement operator.
arXiv Detail & Related papers (2023-05-22T03:27:30Z) - A relaxed proximal gradient descent algorithm for convergent
plug-and-play with proximal denoiser [6.2484576862659065]
This paper presents a new convergent Plug-and-Play (PnP) algorithm based on a relaxed proximal gradient descent scheme with a proximal denoiser.
The algorithm converges for a wider range of regularization parameters, thus allowing more accurate image restoration.
arXiv Detail & Related papers (2023-01-31T16:11:47Z) - A Semismooth Newton Stochastic Proximal Point Algorithm with Variance Reduction [2.048226951354646]
We develop an implementable stochastic proximal point (SPP) method for a class of weakly convex, composite optimization problems.
The proposed algorithm incorporates a variance reduction mechanism and the resulting updates are solved using an inexact semismooth Newton framework.
arXiv Detail & Related papers (2022-04-01T13:08:49Z) - Proximal denoiser for convergent plug-and-play optimization with
nonconvex regularization [7.0226402509856225]
Plug-and-Play (PnP) methods solve ill-posed inverse problems through proximal algorithms by replacing a proximal operator with a denoising neural network.
We show that this denoiser actually corresponds to a gradient step on a smooth function (a sketch of this parameterization appears after this list).
arXiv Detail & Related papers (2022-01-31T14:05:20Z) - Fast Objective & Duality Gap Convergence for Non-Convex Strongly-Concave
Min-Max Problems with PL Condition [52.08417569774822]
This paper focuses on methods for solving smooth non-convex strongly-concave min-max problems, which have received increasing attention due to deep learning applications (e.g., deep AUC maximization).
arXiv Detail & Related papers (2020-06-12T00:32:21Z) - IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method [64.15649345392822]
We introduce a framework for designing primal methods under the decentralized optimization setting where local functions are smooth and strongly convex.
Our approach consists of approximately solving a sequence of sub-problems induced by the accelerated augmented Lagrangian method.
When coupled with accelerated gradient descent, our framework yields a novel primal algorithm whose convergence rate is optimal and matched by recently derived lower bounds.
arXiv Detail & Related papers (2020-06-11T18:49:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.