Bregman Plug-and-Play Priors
- URL: http://arxiv.org/abs/2202.02388v1
- Date: Fri, 4 Feb 2022 21:00:55 GMT
- Title: Bregman Plug-and-Play Priors
- Authors: Abdullah H. Al-Shabili, Xiaojian Xu, Ivan Selesnick, and Ulugbek S.
Kamilov
- Abstract summary: We propose PnP-BPGM and RED-BSD, Bregman-distance variants of PnP and RED. We present a theoretical convergence result for PnP-BPGM and demonstrate our algorithms on Poisson linear inverse problems.
- Score: 12.834024732756465
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The past few years have seen a surge of activity around integration of deep
learning networks and optimization algorithms for solving inverse problems.
Recent work on plug-and-play priors (PnP), regularization by denoising (RED),
and deep unfolding has shown the state-of-the-art performance of such
integration in a variety of applications. However, the current paradigm for
designing such algorithms is inherently Euclidean, due to the usage of the
quadratic norm within the projection and proximal operators. We propose to
broaden this perspective by considering a non-Euclidean setting based on the
more general Bregman distance. Our new Bregman Proximal Gradient Method variant
of PnP (PnP-BPGM) and Bregman Steepest Descent variant of RED (RED-BSD) replace
the traditional quadratic-norm updates in PnP and RED with updates based on the
more general Bregman distance. We present a theoretical convergence result for
PnP-BPGM and
demonstrate the effectiveness of our algorithms on Poisson linear inverse
problems.
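For concreteness, the following is a minimal Python sketch of a Bregman proximal gradient PnP iteration for a Poisson likelihood, assuming the Burg entropy h(x) = -sum_i log(x_i) as the Bregman potential; this is a standard pairing for Poisson data, but the `denoiser` callable, the step size `gamma`, and the initialization below are illustrative rather than the paper's exact choices.

```python
import numpy as np

def pnp_bpgm(y, A, denoiser, gamma=0.1, num_iters=50):
    """Sketch of PnP with a Bregman proximal gradient step for the
    Poisson negative log-likelihood f(x) = sum(Ax - y*log(Ax)),
    using the Burg entropy h(x) = -sum(log x) as the mirror map."""
    x = np.ones(A.shape[1])  # strictly positive initialization
    for _ in range(num_iters):
        Ax = np.maximum(A @ x, 1e-12)
        grad = A.T @ (1.0 - y / Ax)  # gradient of the Poisson data term
        # Mirror step: grad_h(z) = grad_h(x) - gamma * grad, with grad_h(x) = -1/x,
        # which solves in closed form to an elementwise rescaling of x.
        z = x / (1.0 + gamma * x * grad)
        z = np.maximum(z, 1e-12)  # keep iterates in the positive domain of h
        x = denoiser(z)  # plug-and-play: the denoiser stands in for the Bregman proximal map
    return x
```

The Burg entropy is what makes this step well posed: the Poisson data term is smooth relative to h even though its Euclidean gradient is unbounded near zero, which is precisely why the quadratic-norm machinery of standard PnP does not directly apply.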
Related papers
- Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
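As a rough illustration of the mechanism this summary describes, here is a hedged sketch of a linear-interpolation (lookahead-style) outer loop; `inner_update`, `alpha`, and `k` are hypothetical names, not the paper's notation.

```python
import numpy as np

def lookahead_step(theta_slow, inner_update, alpha=0.5, k=5):
    """Run k fast inner optimizer steps, then linearly interpolate the
    slow iterate toward the fast one; the averaging damps oscillations
    that nonmonotone (e.g., adversarial) losses can induce."""
    theta_fast = theta_slow.copy()
    for _ in range(k):
        theta_fast = inner_update(theta_fast)  # e.g., one SGD or Adam step
    return theta_slow + alpha * (theta_fast - theta_slow)  # linear interpolation
```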
arXiv Detail & Related papers (2023-10-20T12:45:12Z)
- Provably Efficient UCB-type Algorithms For Learning Predictive State Representations [55.00359893021461]
The sequential decision-making problem is statistically learnable if it admits a low-rank structure modeled by predictive state representations (PSRs).
This paper proposes the first known UCB-type approach for PSRs, featuring a novel bonus term that upper bounds the total variation distance between the estimated and true models.
In contrast to existing approaches for PSRs, our UCB-type algorithms enjoy computational tractability, guaranteed last-iterate convergence to a near-optimal policy, and guaranteed model accuracy.
arXiv Detail & Related papers (2023-07-01T18:35:21Z)
- Convergent Bregman Plug-and-Play Image Restoration for Poisson Inverse Problems [8.673558396669806]
Plug-and-Play (PnP) methods are efficient iterative algorithms for solving ill-posed image inverse problems.
We propose two PnP algorithms based on the Bregman proximal gradient method and a Bregman score denoiser, tailored to Poisson inverse problems.
arXiv Detail & Related papers (2023-06-06T07:36:47Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
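A hedged sketch of how such a reduced-order evaluation could look; `basis_nets`, `branch_net`, and the shapes below are assumptions for illustration, not the paper's exact architecture.

```python
import numpy as np

def reduced_order_solution(x, pde_params, basis_nets, branch_net):
    """Evaluate K neural basis functions (trained against POD modes) at
    the query points x, then weight them with coefficients produced by a
    branch network that ingests the PDE parameters."""
    phi = np.stack([net(x) for net in basis_nets], axis=-1)  # (num_points, K) basis matrix
    coeffs = branch_net(pde_params)                          # (K,) reduced-order coefficients
    return phi @ coeffs                                      # approximate solution at x
```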
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- Regret Bounds for Expected Improvement Algorithms in Gaussian Process Bandit Optimization [63.8557841188626]
The expected improvement (EI) algorithm is one of the most popular strategies for optimization under uncertainty.
We propose a variant of EI with a standard incumbent defined via the GP predictive mean.
We show that our algorithm converges, and achieves a cumulative regret bound of $\mathcal{O}(\gamma_T \sqrt{T})$.
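A minimal sketch of the acquisition rule described above, with the incumbent taken as the best GP predictive mean; `mu` and `sigma` are the GP posterior over candidate points, and all names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, mu_incumbent):
    """EI acquisition (maximization convention) with the incumbent set
    to the best GP predictive mean rather than the best noisy value."""
    sigma = np.maximum(sigma, 1e-12)  # avoid division by zero
    z = (mu - mu_incumbent) / sigma
    return (mu - mu_incumbent) * norm.cdf(z) + sigma * norm.pdf(z)

# Next query point over a candidate set X:
# x_next = X[np.argmax(expected_improvement(mu, sigma, mu.max()))]
```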
arXiv Detail & Related papers (2022-03-15T13:17:53Z)
- Bregman Gradient Policy Optimization [97.73041344738117]
We design a Bregman gradient policy optimization for reinforcement learning based on Bregman divergences and momentum techniques.
VR-BGPO reaches the best-known complexity of $\tilde{O}(\epsilon^{-3})$ for finding an $\epsilon$-stationary point, requiring only one trajectory per iteration.
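The following is a hedged sketch of a generic Bregman (mirror) policy-gradient step with momentum, not the paper's exact VR-BGPO update; `grad_h` and its inverse define the mirror map, and the identity default recovers ordinary momentum SGD.

```python
import numpy as np

def bregman_policy_step(theta, grad_estimate, momentum, eta=0.01, beta=0.9,
                        grad_h=lambda v: v, grad_h_inv=lambda v: v):
    """One mirror-ascent step on the policy parameters: apply momentum to
    the (stochastic) policy gradient, move in the dual space defined by
    grad_h, and map back with grad_h_inv."""
    momentum = beta * momentum + (1.0 - beta) * grad_estimate
    theta = grad_h_inv(grad_h(theta) + eta * momentum)  # ascent: maximize expected return
    return theta, momentum
```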
arXiv Detail & Related papers (2021-06-23T01:08:54Z)
- Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We establish lower bounds for this setting and design two optimal algorithms that attain them.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z)
- Bayesian imaging using Plug & Play priors: when Langevin meets Tweedie [13.476505672245603]
This paper develops theory, methods, and provably convergent algorithms for performing Bayesian inference with Plug & Play priors.
We introduce two algorithms: 1) PnP-ULA (Unadjusted Langevin Algorithm) for Monte Carlo sampling and MMSE inference; and 2) PnP-SGD (Stochastic Gradient Descent) for MAP inference.
The algorithms are demonstrated on several problems such as image deblurring, inpainting, and denoising, where they are used for point estimation as well as for uncertainty visualisation and quantification.
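A minimal sketch of the PnP-ULA idea, assuming Tweedie's identity to turn a denoiser into an approximate prior score; the step size `delta`, smoothing `eps`, and burn-in are illustrative.

```python
import numpy as np

def pnp_ula(x0, grad_log_likelihood, denoiser, eps=0.05, delta=1e-4,
            num_steps=10_000, burn_in=1_000, rng=None):
    """Unadjusted Langevin iteration whose prior score is approximated
    from a denoiser via Tweedie: score(x) ~ (D(x) - x) / eps. Averaging
    the post-burn-in iterates yields an MMSE (posterior mean) estimate."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    mmse = np.zeros_like(x0)
    for k in range(num_steps):
        score = (denoiser(x) - x) / eps            # Tweedie-based prior score
        drift = grad_log_likelihood(x) + score     # approximate log-posterior gradient
        x = x + delta * drift + np.sqrt(2.0 * delta) * rng.standard_normal(x.shape)
        if k >= burn_in:
            mmse += x
    return mmse / (num_steps - burn_in)
```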
arXiv Detail & Related papers (2021-03-08T12:46:53Z)
- No-Regret Algorithms for Time-Varying Bayesian Optimization [0.0]
We adopt the general variation budget model to capture the time-varying environment.
We introduce two GP-UCB type algorithms, R-GP-UCB and SW-GP-UCB, which use restarts and sliding windows, respectively.
Our results not only recover previous linear bandit results when a linear kernel is used, but also complement the previous regret analysis of time-varying Gaussian process bandits.
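As an illustration, a sliding-window variant can simply refit the surrogate on recent data only; `fit_gp`, the window length, and `beta` below are assumptions, not the paper's exact specification.

```python
import numpy as np

def sw_gp_ucb_select(history, candidates, fit_gp, window=100, beta=2.0):
    """One SW-GP-UCB-style step: condition the GP only on the most recent
    `window` observations so that stale data from a drifting objective is
    forgotten, then pick the candidate maximizing the UCB score."""
    mu, sigma = fit_gp(history[-window:], candidates)  # posterior from windowed data
    ucb = mu + np.sqrt(beta) * sigma                   # optimism under uncertainty
    return candidates[np.argmax(ucb)]
```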
arXiv Detail & Related papers (2021-02-11T22:35:32Z)
- Scalable Plug-and-Play ADMM with Convergence Guarantees [24.957046830965822]
We propose an incremental variant of the widely used PnP-ADMM algorithm, making it scalable to large-scale datasets.
We theoretically analyze the convergence of the algorithm under a set of explicit assumptions.
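One plausible shape for such an incremental loop is sketched below; it processes a single randomly sampled data block per iteration instead of the full dataset. This is a hedged illustration, not necessarily the paper's exact scheme, and `prox_data_terms` and `denoiser` are hypothetical callables.

```python
import numpy as np

def incremental_pnp_admm(x0, prox_data_terms, denoiser, num_iters=100, rng=None):
    """PnP-ADMM-style iteration that touches one data block per step:
    a proximal update on the sampled block, a denoising step in place of
    the regularizer's proximal map, and a scaled dual update."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    z = x0.copy()
    u = np.zeros_like(x0)
    for _ in range(num_iters):
        i = rng.integers(len(prox_data_terms))  # sample one block of the data term
        x = prox_data_terms[i](z - u)           # proximal step on that block only
        z = denoiser(x + u)                     # plug-and-play denoising step
        u = u + x - z                           # dual variable update
    return x
```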
arXiv Detail & Related papers (2020-06-05T04:10:15Z)
- Plug-and-play ISTA converges with kernel denoisers [21.361571421723262]
Plug-and-play (PnP) is a recent paradigm for image regularization.
A fundamental question in this regard is the theoretical convergence of the PnP iterations with kernel denoisers.
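For reference, a minimal PnP-ISTA sketch for a least-squares data term; the step size rule and the `denoiser` callable (which would be a kernel denoiser in the paper's setting) are illustrative.

```python
import numpy as np

def pnp_ista(y, A, denoiser, step=None, num_iters=100):
    """PnP-ISTA: a gradient step on f(x) = 0.5*||Ax - y||^2 followed by a
    denoiser in place of the proximal operator of the regularizer."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of grad f
    x = A.T @ y  # simple initialization
    for _ in range(num_iters):
        grad = A.T @ (A @ x - y)        # gradient of the quadratic data term
        x = denoiser(x - step * grad)   # plug-and-play replacement of prox
    return x
```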
arXiv Detail & Related papers (2020-04-07T06:25:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.