Plug-and-Play split Gibbs sampler: embedding deep generative priors in
Bayesian inference
- URL: http://arxiv.org/abs/2304.11134v1
- Date: Fri, 21 Apr 2023 17:17:51 GMT
- Title: Plug-and-Play split Gibbs sampler: embedding deep generative priors in
Bayesian inference
- Authors: Florentin Coeurdoux, Nicolas Dobigeon, Pierre Chainais
- Abstract summary: This paper introduces a plug-and-play sampling algorithm that leverages variable splitting to efficiently sample from a posterior distribution.
It divides the challenging task of posterior sampling into two simpler sampling problems.
Its performance is compared to recent state-of-the-art optimization and sampling methods.
- Score: 12.91637880428221
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a stochastic plug-and-play (PnP) sampling algorithm
that leverages variable splitting to efficiently sample from a posterior
distribution. The algorithm based on split Gibbs sampling (SGS) draws
inspiration from the alternating direction method of multipliers (ADMM). It
divides the challenging task of posterior sampling into two simpler sampling
problems. The first problem depends on the likelihood function, while the
second is interpreted as a Bayesian denoising problem that can be readily
carried out by a deep generative model. Specifically, for illustrative
purposes, the proposed method is implemented in this paper using
state-of-the-art diffusion-based generative models. Akin to its deterministic
PnP-based counterparts, the proposed method exhibits the great advantage of not
requiring an explicit choice of the prior distribution, which is rather encoded
into a pre-trained generative model. However, unlike optimization methods
(e.g., PnP-ADMM) which generally provide only point estimates, the proposed
approach allows conventional Bayesian estimators to be accompanied by
confidence intervals at a reasonable additional computational cost. Experiments
on commonly studied image processing problems illustrate the efficiency of the
proposed sampling strategy. Its performance is compared to recent
state-of-the-art optimization and sampling methods.
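The two-step scheme described in the abstract can be sketched on a toy linear inverse problem. The likelihood step is a closed-form Gaussian draw; the denoising step, where the paper plugs in a diffusion model, is replaced here by the exact denoising posterior of a N(0, I) prior. This is a minimal stand-in sketch, not the authors' implementation, and all names and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_x_given_z(A, y, z, sigma, rho):
    # Likelihood step: with y = A x + N(0, sigma^2 I) and the quadratic
    # coupling ||x - z||^2 / (2 rho^2), x | z, y is Gaussian.
    d = z.shape[0]
    prec = A.T @ A / sigma**2 + np.eye(d) / rho**2
    cov = np.linalg.inv(prec)
    mean = cov @ (A.T @ y / sigma**2 + z / rho**2)
    return rng.multivariate_normal(mean, cov)

def sample_z_given_x(x, rho):
    # Prior step: z | x is a Bayesian denoising problem at noise level rho.
    # A pre-trained diffusion model would act here; as a stand-in we use a
    # N(0, I) prior, whose denoising posterior is available in closed form.
    shrink = 1.0 / (1.0 + rho**2)
    return shrink * x + np.sqrt(shrink) * rho * rng.standard_normal(x.shape)

def split_gibbs(A, y, sigma=0.1, rho=0.5, n_iter=500, burn=100):
    x = np.zeros(A.shape[1])
    z = np.zeros_like(x)
    samples = []
    for t in range(n_iter):
        x = sample_x_given_z(A, y, z, sigma, rho)
        z = sample_z_given_x(x, rho)
        if t >= burn:
            samples.append(x)
    return np.array(samples)

# Toy problem: recover x from y = A x + noise.
d, m = 4, 8
A = rng.standard_normal((m, d))
x_true = rng.standard_normal(d)
y = A @ x_true + 0.1 * rng.standard_normal(m)
samples = split_gibbs(A, y)
x_mmse = samples.mean(axis=0)                          # Bayesian point estimate
lo, hi = np.percentile(samples, [2.5, 97.5], axis=0)   # 95% credible intervals
```

The interval endpoints `lo`/`hi` illustrate the uncertainty quantification that point-estimate methods such as PnP-ADMM do not provide.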
Related papers
- HJ-sampler: A Bayesian sampler for inverse problems of a stochastic process by leveraging Hamilton-Jacobi PDEs and score-based generative models [1.949927790632678]
This paper builds on the log transform known as the Cole-Hopf transform in Brownian motion contexts.
We develop a new algorithm, named the HJ-sampler, for inference in the inverse problem of a stochastic differential equation with given terminal observations.
arXiv Detail & Related papers (2024-09-15T05:30:54Z)
- Optimal Budgeted Rejection Sampling for Generative Models [54.050498411883495]
Rejection sampling methods have been proposed to improve the performance of discriminator-based generative models.
We first propose an Optimal Budgeted Rejection Sampling scheme that is provably optimal.
Second, we propose an end-to-end method that incorporates the sampling scheme into the training procedure to further enhance the model's overall performance.
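The paper's optimal acceptance rule is derived there; this sketch only shows the plain budgeted rejection loop that such a scheme refines, with a simple Gaussian toy target standing in for a generator/discriminator pair. All names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def budgeted_rejection_sample(proposal, ratio, M, n, budget):
    # Plain rejection sampling with a cap on proposal draws. `proposal`
    # plays the role of the generator, `ratio` the (unnormalized)
    # target/proposal density ratio a discriminator would estimate,
    # and M upper-bounds that ratio.
    out, used = [], 0
    while len(out) < n and used < budget:
        x = proposal()
        used += 1
        if rng.uniform() < ratio(x) / M:
            out.append(x)
    return np.array(out), used

# Toy example: target N(0, 1) through a wider N(0, 2^2) proposal.
target = lambda x: np.exp(-0.5 * x**2)              # unnormalized target
prop = lambda x: np.exp(-0.5 * (x / 2.0)**2) / 2.0  # proposal, same normalization
samples, used = budgeted_rejection_sample(
    proposal=lambda: rng.normal(0.0, 2.0),
    ratio=lambda x: target(x) / prop(x),
    M=2.0,            # sup of the ratio, attained at x = 0
    n=2000, budget=20000,
)
```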
arXiv Detail & Related papers (2023-11-01T11:52:41Z)
- Gaussian Cooling and Dikin Walks: The Interior-Point Method for Logconcave Sampling [8.655526882770742]
In the 1990s, Nesterov and Nemirovski developed the Interior-Point Method (IPM) for convex optimization based on self-concordant barriers.
In 2012, Kannan and Narayanan proposed the Dikin walk for uniformly sampling polytopes.
Here we generalize this approach by developing and adapting IPM machinery together with the Dikin walk to obtain polynomial-time sampling algorithms.
arXiv Detail & Related papers (2023-07-24T17:15:38Z)
- Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss between real data and artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
arXiv Detail & Related papers (2023-06-13T01:18:16Z)
- Langevin Monte Carlo for Contextual Bandits [72.00524614312002]
Langevin Monte Carlo Thompson Sampling (LMC-TS) is proposed to directly sample from the posterior distribution in contextual bandits.
We prove that the proposed algorithm achieves the same sublinear regret bound as the best Thompson sampling algorithms for a special case of contextual bandits.
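The LMC-TS idea can be sketched on a toy linear bandit: instead of sampling the reward parameter from a closed-form posterior, each round runs a few unadjusted Langevin steps on the ridge log-posterior and acts greedily on the resulting sample. The arm set, step sizes, and noise scales below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(2)

def langevin_sample(theta, X, y, lam=1.0, step=1e-3, n_steps=50):
    # Unadjusted Langevin steps targeting the ridge posterior over the
    # reward parameter (Gaussian likelihood, N(0, 1/lam I) prior).
    for _ in range(n_steps):
        grad = X.T @ (X @ theta - y) + lam * theta    # -grad log posterior
        theta = (theta - step * grad
                 + np.sqrt(2.0 * step) * rng.standard_normal(theta.shape))
    return theta

def lmc_ts(arms, theta_star, T=300, obs_noise=0.1):
    d = arms.shape[1]
    X, y = np.zeros((0, d)), np.zeros(0)
    theta = rng.standard_normal(d)                    # initial prior sample
    total = 0.0
    for _ in range(T):
        if len(y):
            theta = langevin_sample(theta, X, y)      # approximate posterior sample
        a = int(np.argmax(arms @ theta))              # act greedily on the sample
        r = arms[a] @ theta_star + obs_noise * rng.standard_normal()
        X, y = np.vstack([X, arms[a]]), np.append(y, r)
        total += r
    return total

arms = np.eye(3)                                      # 3 orthogonal arms
theta_star = np.array([0.2, 1.0, 0.5])                # arm 1 is best
reward = lmc_ts(arms, theta_star)
```

Warm-starting the chain from the previous round's sample keeps the per-round cost to a handful of gradient evaluations.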
arXiv Detail & Related papers (2022-06-22T17:58:23Z)
- Calibrate and Debias Layer-wise Sampling for Graph Convolutional Networks [39.56471534442315]
This paper revisits layer-wise sampling from a matrix approximation perspective.
We propose a new principle for constructing sampling probabilities and an efficient debiasing algorithm.
Improvements are demonstrated by extensive analyses of estimation variance and experiments on common benchmarks.
arXiv Detail & Related papers (2022-06-01T15:52:06Z)
- Sampling from Arbitrary Functions via PSD Models [55.41644538483948]
We take a two-step approach by first modeling the probability distribution and then sampling from that model.
We show that these models can approximate a large class of densities concisely using few evaluations, and present a simple algorithm to effectively sample from these models.
arXiv Detail & Related papers (2021-10-20T12:25:22Z)
- Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z)
- Bayesian imaging using Plug & Play priors: when Langevin meets Tweedie [13.476505672245603]
This paper develops theory, methods, and provably convergent algorithms for performing Bayesian inference with plug-and-play (PnP) priors.
We introduce two algorithms: 1) PnP-ULA (Unadjusted Langevin Algorithm) for Monte Carlo sampling and MMSE inference; and 2) PnP-SGD (Stochastic Gradient Descent) for MAP inference.
The algorithms are demonstrated on several problems such as image denoising and inpainting, where they are used for point estimation as well as for uncertainty visualisation and quantification.
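The denoiser-as-prior mechanism in this line of work rests on Tweedie's identity: an MMSE denoiser D at noise level sigma recovers the prior score as (D(x) - x) / sigma^2, which can drive an unadjusted Langevin chain. A minimal sketch using a closed-form N(0, I) denoiser in place of a learned one; all names and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def denoiser(x, sigma):
    # Stand-in MMSE denoiser for a N(0, I) prior; a learned (e.g. deep)
    # denoiser would be plugged in here instead.
    return x / (1.0 + sigma**2)

def pnp_ula(y, A, noise, sigma=0.3, delta=1e-4, n_iter=4000, burn=1000):
    # Unadjusted Langevin: the prior score is obtained from the denoiser
    # via Tweedie's formula, score(x) ~= (D(x) - x) / sigma^2.
    x = np.zeros(A.shape[1])
    samples = []
    for k in range(n_iter):
        grad_lik = A.T @ (y - A @ x) / noise**2        # grad log p(y | x)
        prior_score = (denoiser(x, sigma) - x) / sigma**2
        x = (x + delta * (grad_lik + prior_score)
             + np.sqrt(2.0 * delta) * rng.standard_normal(x.shape))
        if k >= burn:
            samples.append(x)
    return np.array(samples)

# Toy linear inverse problem.
d, m = 3, 6
A = rng.standard_normal((m, d))
x_true = rng.standard_normal(d)
y = A @ x_true + 0.1 * rng.standard_normal(m)
samples = pnp_ula(y, A, noise=0.1)
x_mmse = samples.mean(axis=0)
```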
arXiv Detail & Related papers (2021-03-08T12:46:53Z)
- Pathwise Conditioning of Gaussian Processes [72.61885354624604]
Conventional approaches for simulating Gaussian process posteriors view samples as draws from marginal distributions of process values at finite sets of input locations.
This distribution-centric characterization leads to generative strategies that scale cubically in the size of the desired random vector.
We show how this pathwise interpretation of conditioning gives rise to a general family of approximations that lend themselves to efficiently sampling Gaussian process posteriors.
arXiv Detail & Related papers (2020-11-08T17:09:37Z)
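The pathwise view is Matheron's rule: a posterior sample is a prior sample corrected by the kernel-weighted residual at the observed inputs, f_post(.) = f(.) + K(., X)(K(X, X) + noise I)^{-1}(y - f(X) - eps). A minimal sketch; the kernel, data, and names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def rbf(a, b, ls=0.5):
    # Squared-exponential kernel on 1-D inputs.
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ls**2)

def pathwise_posterior_draw(x_train, y_train, x_test, noise=1e-2):
    # Matheron's rule: draw (f_train, f_test) jointly from the prior,
    # then shift the test part by the training residual.
    X = np.concatenate([x_train, x_test])
    K = rbf(X, X) + 1e-9 * np.eye(len(X))
    f = np.linalg.cholesky(K) @ rng.standard_normal(len(X))  # joint prior draw
    n = len(x_train)
    f_tr, f_te = f[:n], f[n:]
    eps = np.sqrt(noise) * rng.standard_normal(n)            # prior noise draw
    Kxx = rbf(x_train, x_train) + noise * np.eye(n)
    Ksx = rbf(x_test, x_train)
    return f_te + Ksx @ np.linalg.solve(Kxx, y_train - f_tr - eps)

x_train = np.array([-1.0, 0.0, 1.0])
y_train = np.sin(3.0 * x_train)
x_test = np.array([-0.5, 0.25, 0.8, 2.0])
draws = np.stack([pathwise_posterior_draw(x_train, y_train, x_test)
                  for _ in range(400)])
```

The sketch keeps an exact Cholesky of the joint prior for clarity; in practice the prior draw is itself approximated (e.g. with random features), which is where the improvement over the cubic distribution-centric approach comes from.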
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.