Regularization by denoising: Bayesian model and Langevin-within-split Gibbs sampling
- URL: http://arxiv.org/abs/2402.12292v1
- Date: Mon, 19 Feb 2024 17:12:16 GMT
- Title: Regularization by denoising: Bayesian model and Langevin-within-split Gibbs sampling
- Authors: Elhadji C. Faye, Mame Diarra Fall and Nicolas Dobigeon
- Abstract summary: This paper introduces a Bayesian framework for image inversion by deriving a probabilistic counterpart to the regularization-by-denoising (RED) paradigm.
It implements a Monte Carlo algorithm specifically tailored for sampling from the resulting posterior distribution, based on an asymptotically exact data augmentation (AXDA).
The proposed algorithm is an approximate instance of split Gibbs sampling (SGS) which embeds one Langevin Monte Carlo step.
- Score: 6.453497703172228
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a Bayesian framework for image inversion by deriving a
probabilistic counterpart to the regularization-by-denoising (RED) paradigm. It
additionally implements a Monte Carlo algorithm specifically tailored for
sampling from the resulting posterior distribution, based on an asymptotically
exact data augmentation (AXDA). The proposed algorithm is an approximate
instance of split Gibbs sampling (SGS) which embeds one Langevin Monte Carlo
step. The proposed method is applied to common imaging tasks such as
deblurring, inpainting and super-resolution, demonstrating its efficacy through
extensive numerical experiments. These contributions advance Bayesian inference
in imaging by leveraging data-driven regularization strategies within a
probabilistic framework.
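To make the splitting concrete, the following is a minimal numpy sketch of a Langevin-within-split-Gibbs loop on a toy 1D problem. Everything here is an illustrative assumption: the identity forward operator, the 3-tap moving-average stand-in for a learned denoiser, and all step sizes and weights; the paper's algorithm may also sample the Gaussian conditional exactly rather than via an unadjusted Langevin step as done below.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 64
A = np.eye(n)                       # toy identity forward operator (assumption)
sigma2 = 0.05                       # likelihood noise variance
rho2 = 0.1                          # AXDA splitting/coupling parameter
lam = 1.0                           # RED regularization weight
gamma = 1e-3                        # Langevin step size (placeholder)

x_true = np.sin(np.linspace(0, 4 * np.pi, n))
y = A @ x_true + np.sqrt(sigma2) * rng.standard_normal(n)

def denoiser(z):
    # 3-tap moving average standing in for a learned denoiser (assumption).
    return np.convolve(z, np.ones(3) / 3.0, mode="same")

x, z = y.copy(), y.copy()
samples = []
for it in range(2000):
    # ULA step on x | z, y: potential ||y - Ax||^2/(2 sigma2) + ||x - z||^2/(2 rho2)
    grad_x = A.T @ (A @ x - y) / sigma2 + (x - z) / rho2
    x = x - gamma * grad_x + np.sqrt(2 * gamma) * rng.standard_normal(n)
    # ULA step on z | x using the RED prior gradient lam * (z - D(z))
    grad_z = lam * (z - denoiser(z)) + (z - x) / rho2
    z = z - gamma * grad_z + np.sqrt(2 * gamma) * rng.standard_normal(n)
    if it >= 1000:                  # discard burn-in
        samples.append(x.copy())

x_mmse = np.mean(samples, axis=0)   # Monte Carlo estimate of the posterior mean
```

The AXDA coupling term ||x - z||^2 / (2 rho^2) is what allows the likelihood-driven update on x and the denoiser-driven prior update on z to proceed separately.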
Related papers
- Solving Linear-Gaussian Bayesian Inverse Problems with Decoupled Diffusion Sequential Monte Carlo [11.629137473977888]
We design a sequential Monte Carlo method for linear-Gaussian inverse problems.
We demonstrate the effectiveness of our Decoupled Diffusion Sequential Monte Carlo (DDSMC) algorithm on both synthetic data and image reconstruction tasks.
arXiv Detail & Related papers (2025-02-10T11:59:02Z)
- Arbitrary-steps Image Super-resolution via Diffusion Inversion [68.78628844966019]
This study presents a new image super-resolution (SR) technique based on diffusion inversion, aiming at harnessing the rich image priors encapsulated in large pre-trained diffusion models to improve SR performance.
We design a Partial noise Prediction strategy to construct an intermediate state of the diffusion model, which serves as the starting sampling point.
Once trained, this noise predictor can be used to initialize the sampling process partially along the diffusion trajectory, generating the desired high-resolution result.
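A hedged sketch of the partial-trajectory idea follows, with a dummy noise predictor standing in for both the trained Partial noise Prediction module and the pre-trained diffusion model; the noise schedule, trajectory lengths, and signal shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, t0 = 1000, 250                        # full trajectory length; partial start point
betas = np.linspace(1e-4, 2e-2, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def eps_predictor(x_t, t):
    # Dummy placeholder for a learned noise predictor (assumption).
    return np.zeros_like(x_t)

x_lr_up = rng.standard_normal(64)        # stand-in for the upsampled low-res image
# Construct the intermediate state x_{t0} instead of starting from pure noise at T.
x = np.sqrt(alpha_bar[t0]) * x_lr_up \
    + np.sqrt(1 - alpha_bar[t0]) * eps_predictor(x_lr_up, t0)
# Reverse only the last t0 steps of the trajectory (standard DDPM-style update).
for t in range(t0 - 1, -1, -1):
    eps = eps_predictor(x, t)
    x = (x - betas[t] / np.sqrt(1 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
    if t > 0:
        x = x + np.sqrt(betas[t]) * rng.standard_normal(x.shape)
```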
arXiv Detail & Related papers (2024-12-12T07:24:13Z)
- Do Bayesian imaging methods report trustworthy probabilities? [0.18434042562191813]
We run a large experiment requiring 1,000 GPU-hours to probe the accuracy of five canonical Bayesian imaging methods.
We find that, in a few cases, the probabilities reported by modern Bayesian imaging techniques are in broad agreement with long-term averages.
Existing Bayesian imaging methods are generally not able to deliver reliable uncertainty quantification results.
arXiv Detail & Related papers (2024-05-13T20:57:01Z)
- Solving Linear Inverse Problems Provably via Posterior Sampling with Latent Diffusion Models [98.95988351420334]
We present the first framework to solve linear inverse problems leveraging pre-trained latent diffusion models.
We theoretically analyze our algorithm showing provable sample recovery in a linear model setting.
arXiv Detail & Related papers (2023-07-02T17:21:30Z)
- Poisson-Gaussian Holographic Phase Retrieval with Score-based Image Prior [19.231581775644617]
We propose a new algorithm called "AWFS" that uses the accelerated Wirtinger flow (AWF) with a score function as a generative prior.
We calculate the gradient of the log-likelihood function for phase retrieval (PR) and determine the Lipschitz constant.
We provide theoretical analysis that establishes a critical-point convergence guarantee for the proposed algorithm.
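As a rough illustration of a Wirtinger-flow update combined with a score-based prior (not the authors' AWFS: no acceleration, a toy Gaussian score in place of the trained score network, a random rather than spectral initializer, and a simplified intensity loss instead of the Poisson-Gaussian likelihood):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 32, 128
x_true = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
y = np.abs(A @ x_true) ** 2                    # noiseless intensity measurements (toy)

def score(x):
    # Toy prior score: standard complex Gaussian, grad log p(x) = -x (assumption).
    return -x

x = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
step = 0.1 / np.linalg.norm(x) ** 2            # WF-style step scaled by the init norm
prior_w = 0.05                                 # placeholder prior weight
for _ in range(500):
    r = np.abs(A @ x) ** 2 - y                 # intensity residual
    grad = A.conj().T @ (r * (A @ x)) / m      # Wirtinger gradient of the data fit
    x = x - step * (grad - prior_w * score(x)) # descend data term, ascend prior score
```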
arXiv Detail & Related papers (2023-05-12T18:08:47Z)
- Plug-and-Play split Gibbs sampler: embedding deep generative priors in Bayesian inference [12.91637880428221]
This paper introduces a plug-and-play sampling algorithm that leverages variable splitting to efficiently sample from a posterior distribution.
It divides the challenging task of posterior sampling into two simpler sampling problems.
Its performance is compared to recent state-of-the-art optimization and sampling methods.
arXiv Detail & Related papers (2023-04-21T17:17:51Z)
- Langevin Monte Carlo for Contextual Bandits [72.00524614312002]
Langevin Monte Carlo Thompson Sampling (LMC-TS) is proposed to directly sample from the posterior distribution in contextual bandits.
We prove that the proposed algorithm achieves the same sublinear regret bound as the best Thompson sampling algorithms for a special case of contextual bandits.
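A minimal sketch of the LMC-TS idea for a linear-Gaussian contextual bandit; the reward model, unit-variance noise, standard normal prior, step-size schedule, and number of inner Langevin steps are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, T = 5, 10, 300
theta_true = rng.standard_normal(d)
G, b = np.eye(d), np.zeros(d)          # prior precision I plus accumulated X^T X, X^T y
theta = np.zeros(d)                    # Langevin chain state, warm-started across rounds

for t in range(T):
    arms = rng.standard_normal((K, d)) # contexts of the K arms this round
    step = 0.5 / (1.0 + t)             # shrinking step size (placeholder schedule)
    for _ in range(20):                # a few ULA steps targeting p(theta | data)
        grad = b - G @ theta           # gradient of the Gaussian log-posterior
        theta = theta + 0.5 * step * grad + np.sqrt(step) * rng.standard_normal(d)
    a = int(np.argmax(arms @ theta))   # Thompson action from the approximate sample
    reward = arms[a] @ theta_true + rng.standard_normal()
    G += np.outer(arms[a], arms[a])    # update sufficient statistics
    b += reward * arms[a]
```

Replacing the exact Gaussian posterior draw of classical Thompson sampling with a few Langevin steps is what lets the scheme extend beyond conjugate models.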
arXiv Detail & Related papers (2022-06-22T17:58:23Z)
- Patch-Based Image Restoration using Expectation Propagation [7.7731951589289565]
Monte Carlo techniques can suffer from scalability issues in high-dimensional inference problems such as image restoration.
EP is used here to approximate the posterior distributions using products of multivariate Gaussian densities.
Experiments conducted for denoising, inpainting and deconvolution problems with Gaussian and Poisson noise illustrate the potential benefits of such a flexible approximate Bayesian method.
arXiv Detail & Related papers (2021-06-18T10:45:15Z)
- Analysis and Design of Thompson Sampling for Stochastic Partial Monitoring [91.22679787578438]
We present a novel Thompson-sampling-based algorithm for partial monitoring.
We prove that the new algorithm achieves the logarithmic problem-dependent expected pseudo-regret $\mathrm{O}(\log T)$ for a linearized variant of the problem with local observability.
arXiv Detail & Related papers (2020-06-17T05:48:33Z)
- Efficiently Sampling Functions from Gaussian Process Posteriors [76.94808614373609]
We propose an easy-to-use and general-purpose approach for fast posterior sampling.
We demonstrate how decoupled sample paths accurately represent Gaussian process posteriors at a fraction of the usual cost.
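The decoupling can be illustrated with Matheron's rule: draw a joint prior sample, then apply a pathwise data-driven update to turn it into a posterior sample. The exact RBF prior sample below is an assumption kept for brevity; the paper's speedup comes from replacing it with a cheap approximation such as random Fourier features.

```python
import numpy as np

rng = np.random.default_rng(0)

def k(a, b, ls=0.3):
    # RBF kernel on 1D inputs (illustrative choice).
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

X = rng.uniform(0, 1, 8)                 # training inputs
y = np.sin(6 * X) + 0.1 * rng.standard_normal(8)
Xs = np.linspace(0, 1, 200)              # test inputs
sn2 = 0.01                               # observation noise variance

# Joint prior sample over train and test locations (exact here; the paper
# would substitute a cheap approximate prior sample).
Z = np.concatenate([X, Xs])
L = np.linalg.cholesky(k(Z, Z) + 1e-6 * np.eye(Z.size))
f = L @ rng.standard_normal(Z.size)
f_X, f_Xs = f[:8], f[8:]

# Pathwise (Matheron) update turns the prior sample into a posterior sample.
eps = np.sqrt(sn2) * rng.standard_normal(8)
alpha = np.linalg.solve(k(X, X) + sn2 * np.eye(8), y - f_X - eps)
f_post = f_Xs + k(Xs, X) @ alpha         # one posterior function sample at Xs
```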
arXiv Detail & Related papers (2020-02-21T14:03:16Z)
- Bayesian Deep Learning and a Probabilistic Perspective of Generalization [56.69671152009899]
We show that deep ensembles provide an effective mechanism for approximate Bayesian marginalization.
We also propose a related approach that further improves the predictive distribution by marginalizing within basins of attraction.
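A toy sketch of the deep-ensemble view of approximate Bayesian marginalization: independently initialized members are trained and their predictive distributions averaged. The tiny one-hidden-layer networks and XOR-style data are stand-ins, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)    # toy XOR-like labels

def train_member(seed, steps=500, lr=0.5):
    # One ensemble member: tanh hidden layer + logistic output, plain SGD.
    r = np.random.default_rng(seed)
    W1, W2 = r.standard_normal((2, 16)), r.standard_normal(16)
    for _ in range(steps):
        h = np.tanh(X @ W1)
        p = 1 / (1 + np.exp(-h @ W2))
        g = p - y                            # gradient of the logistic loss w.r.t. logits
        W2 -= lr * h.T @ g / len(y)
        W1 -= lr * X.T @ ((g[:, None] * W2) * (1 - h ** 2)) / len(y)
    return W1, W2

members = [train_member(s) for s in range(5)]

def predict(x):
    # BMA-style average of the members' predictive probabilities.
    probs = [1 / (1 + np.exp(-np.tanh(x @ W1) @ W2)) for W1, W2 in members]
    return np.mean(probs, axis=0)
```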
arXiv Detail & Related papers (2020-02-20T15:13:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.