Bayesian imaging using Plug & Play priors: when Langevin meets Tweedie
- URL: http://arxiv.org/abs/2103.04715v2
- Date: Tue, 9 Mar 2021 15:01:21 GMT
- Title: Bayesian imaging using Plug & Play priors: when Langevin meets Tweedie
- Authors: Rémi Laumont, Valentin de Bortoli, Andrés Almansa, Julie Delon,
Alain Durmus and Marcelo Pereyra
- Abstract summary: This paper develops theory, methods, and provably convergent algorithms for performing Bayesian inference with Plug & Play (PnP) priors.
We introduce two algorithms: 1) PnP-ULA (Unadjusted Langevin Algorithm) for Monte Carlo sampling and MMSE inference; and 2) PnP-SGD (Stochastic Gradient Descent) for MAP inference.
The algorithms are demonstrated on several problems such as image deblurring, inpainting, and denoising, where they are used for point estimation as well as for uncertainty visualisation and quantification.
- Score: 13.476505672245603
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Since the seminal work of Venkatakrishnan et al. (2013), Plug & Play (PnP)
methods have become ubiquitous in Bayesian imaging. These methods derive
Minimum Mean Square Error (MMSE) or Maximum A Posteriori (MAP) estimators for
inverse problems in imaging by combining an explicit likelihood function with a
prior that is implicitly defined by an image denoising algorithm. The PnP
algorithms proposed in the literature mainly differ in the iterative schemes
they use for optimisation or for sampling. In the case of optimisation schemes,
some recent works guarantee the convergence to a fixed point, albeit not
necessarily a MAP estimate. In the case of sampling schemes, to the best of our
knowledge, there is no known proof of convergence. There also remain important
open questions regarding whether the underlying Bayesian models and estimators
are well defined, well-posed, and have the basic regularity properties required
to support these numerical schemes. To address these limitations, this paper
develops theory, methods, and provably convergent algorithms for performing
Bayesian inference with PnP priors. We introduce two algorithms: 1) PnP-ULA
(Unadjusted Langevin Algorithm) for Monte Carlo sampling and MMSE inference;
and 2) PnP-SGD (Stochastic Gradient Descent) for MAP inference. Using recent
results on the quantitative convergence of Markov chains, we establish detailed
convergence guarantees for these two algorithms under realistic assumptions on
the denoising operators used, with special attention to denoisers based on deep
neural networks. We also show that these algorithms approximately target a
decision-theoretically optimal Bayesian model that is well-posed. The proposed
algorithms are demonstrated on several canonical problems such as image
deblurring, inpainting, and denoising, where they are used for point estimation
as well as for uncertainty visualisation and quantification.
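To make the two proposed schemes concrete, here is a minimal numerical sketch of PnP-ULA and PnP-SGD on a toy Gaussian model. The helper names (`score_from_denoiser`, `pnp_ula`, `pnp_sgd`) are illustrative, not the authors' code; the one ingredient taken from the paper is Tweedie's identity, which turns an MMSE denoiser into an approximate prior score.

```python
import numpy as np

def score_from_denoiser(denoiser, x, eps):
    """Tweedie's identity: for an MMSE denoiser D_eps at noise level eps,
    grad log p_eps(x) ~ (D_eps(x) - x) / eps."""
    return (denoiser(x) - x) / eps

def pnp_ula(y, grad_loglik, denoiser, eps, step, n_iter, rng):
    """PnP-ULA sketch: unadjusted Langevin sampling of the posterior;
    the running mean of the chain approximates the MMSE estimate."""
    x = y.copy()
    mean = np.zeros_like(y)
    for k in range(n_iter):
        drift = grad_loglik(x) + score_from_denoiser(denoiser, x, eps)
        x = x + step * drift + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
        mean += (x - mean) / (k + 1)  # running average of the samples
    return mean

def pnp_sgd(x0, grad_loglik, denoiser, eps, step0, n_iter):
    """PnP-SGD sketch: gradient ascent on the log-posterior with a
    decaying step size, converging towards a MAP estimate."""
    x = x0.copy()
    for k in range(n_iter):
        step = step0 / np.sqrt(1.0 + k)
        x = x + step * (grad_loglik(x) + score_from_denoiser(denoiser, x, eps))
    return x

# Toy model with closed-form answers: likelihood y = x + unit-variance noise,
# prior N(0, I). The exact MMSE denoiser at noise level eps = 0.1 is the
# linear shrinkage x / (1 + eps) = x / 1.1.
y = np.array([1.0, -2.0, 0.5])
grad_ll = lambda x: y - x                     # gradient of log N(y | x, I)
den = lambda x: x / 1.1                       # shrinkage denoiser, eps = 0.1
x_map = pnp_sgd(np.zeros_like(y), grad_ll, den, eps=0.1, step0=0.1, n_iter=500)
x_mmse = pnp_ula(y, grad_ll, den, eps=0.1, step=0.01, n_iter=5000,
                 rng=np.random.default_rng(0))
```

With the approximate score $-x/1.1$ both estimates land near $y/2$, the exact posterior mean/mode of this Gaussian toy model, which is a quick sanity check that the Tweedie approximation behaves as intended.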
Related papers
- Unrolled denoising networks provably learn optimal Bayesian inference [54.79172096306631]
We prove the first rigorous learning guarantees for neural networks based on unrolling approximate message passing (AMP).
For compressed sensing, we prove that when trained on data drawn from a product prior, the layers of the network converge to the same denoisers used in Bayes AMP.
arXiv Detail & Related papers (2024-09-19T17:56:16Z) - Unfolded proximal neural networks for robust image Gaussian denoising [7.018591019975253]
We propose a unified framework to build PNNs for the Gaussian denoising task, based on both the dual-FB and the primal-dual Chambolle-Pock algorithms.
We also show that accelerated versions of these algorithms enable skip connections in the associated NN layers.
arXiv Detail & Related papers (2023-08-06T15:32:16Z) - Poisson-Gaussian Holographic Phase Retrieval with Score-based Image
Prior [19.231581775644617]
We propose a new algorithm called "AWFS" that uses the accelerated Wirtinger flow (AWF) with a score function as generative prior.
We calculate the gradient of the log-likelihood function for PR and determine the Lipschitz constant.
We provide theoretical analysis that establishes a critical-point convergence guarantee for the proposed algorithm.
arXiv Detail & Related papers (2023-05-12T18:08:47Z) - A relaxed proximal gradient descent algorithm for convergent
plug-and-play with proximal denoiser [6.2484576862659065]
This paper presents a new convergent Plug-and-Play (PnP) proximal gradient descent algorithm.
The algorithm converges for a wider range of regularization parameters, thus allowing more accurate restoration of an image.
arXiv Detail & Related papers (2023-01-31T16:11:47Z) - Regret Bounds for Expected Improvement Algorithms in Gaussian Process
Bandit Optimization [63.8557841188626]
The expected improvement (EI) algorithm is one of the most popular strategies for optimization under uncertainty.
We propose a variant of EI with a standard incumbent defined via the GP predictive mean.
We show that our algorithm converges, and achieves a cumulative regret bound of $\mathcal{O}(\gamma_T \sqrt{T})$.
arXiv Detail & Related papers (2022-03-15T13:17:53Z) - On Maximum-a-Posteriori estimation with Plug & Play priors and
stochastic gradient descent [13.168923974530307]
Methods to solve imaging problems usually combine an explicit data likelihood function with a prior that explicitly models expected properties of the solution.
In a departure from explicit modelling, several recent works have proposed and studied the use of implicit priors defined by an image denoising algorithm.
arXiv Detail & Related papers (2022-01-16T20:50:08Z) - Bayesian decision-making under misspecified priors with applications to
meta-learning [64.38020203019013]
Thompson sampling and other sequential decision-making algorithms are popular approaches to tackle explore/exploit trade-offs in contextual bandits.
We show that performance degrades gracefully with misspecified priors.
arXiv Detail & Related papers (2021-07-03T23:17:26Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with dependence on confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - Parallel Stochastic Mirror Descent for MDPs [72.75921150912556]
We consider the problem of learning the optimal policy for infinite-horizon Markov decision processes (MDPs)
Some variant of Mirror Descent is proposed for convex programming problems with Lipschitz-continuous functionals.
We analyze this algorithm in a general case and obtain an estimate of the convergence rate that does not accumulate errors during the operation of the method.
arXiv Detail & Related papers (2021-02-27T19:28:39Z) - Adaptive Sampling for Best Policy Identification in Markov Decision
Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z) - Scalable Plug-and-Play ADMM with Convergence Guarantees [24.957046830965822]
We propose an incremental variant of the widely used ADMM algorithm, making it scalable to large-scale datasets.
We theoretically analyze the convergence of the algorithm under a set of explicit assumptions.
arXiv Detail & Related papers (2020-06-05T04:10:15Z)
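The PnP-ADMM entry above follows the same substitution idea as the main paper: the proximal operator of the prior is replaced by a denoiser inside a standard ADMM loop. The sketch below illustrates the generic (non-incremental) iteration with hypothetical helper names, not the paper's scalable variant, on a toy quadratic fidelity.

```python
import numpy as np

def pnp_admm(y, prox_fidelity, denoiser, n_iter):
    """Generic PnP-ADMM sketch: alternate a data-fidelity proximal step,
    a denoising step standing in for the prior's prox, and a dual update."""
    x = y.copy()
    z = y.copy()
    u = np.zeros_like(y)
    for _ in range(n_iter):
        x = prox_fidelity(z - u)   # proximal step on the data-fidelity term
        z = denoiser(x + u)        # denoiser replaces the prior's prox
        u = u + x - z              # dual (scaled Lagrange multiplier) update
    return x

# Toy usage: quadratic fidelity (1/2)||x - y||^2 with penalty rho = 1,
# whose prox at v is (y + v) / 2, and a hypothetical linear shrinkage denoiser.
y = np.array([2.0, -1.0, 0.0])
prox = lambda v: (y + v) / 2
den = lambda v: v / 1.5
x_hat = pnp_admm(y, prox, den, n_iter=200)
```

Because both operators are linear here, the fixed point can be computed by hand ($x^* = 2y/3$), which makes the toy run easy to verify.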
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.