The split Gibbs sampler revisited: improvements to its algorithmic
structure and augmented target distribution
- URL: http://arxiv.org/abs/2206.13894v3
- Date: Wed, 3 May 2023 10:58:21 GMT
- Title: The split Gibbs sampler revisited: improvements to its algorithmic
structure and augmented target distribution
- Authors: Marcelo Pereyra, Luis A. Vargas-Mieles, Konstantinos C. Zygalakis
- Abstract summary: Current state-of-the-art methods often address these difficulties by replacing the posterior density with a smooth approximation.
An alternative approach is based on data augmentation and relaxation, where auxiliary variables are introduced in order to construct an approximate augmented posterior distribution.
This paper proposes a new accelerated proximal MCMC method called latent space SK-ROCK, which tightly combines the benefits of the two strategies.
- Score: 1.1279808969568252
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Developing efficient Bayesian computation algorithms for imaging inverse
problems is challenging due to the dimensionality involved and because Bayesian
imaging models are often not smooth. Current state-of-the-art methods often
address these difficulties by replacing the posterior density with a smooth
approximation that is amenable to efficient exploration by using Langevin
Markov chain Monte Carlo (MCMC) methods. An alternative approach is based on
data augmentation and relaxation, where auxiliary variables are introduced in
order to construct an approximate augmented posterior distribution that is
amenable to efficient exploration by Gibbs sampling. This paper proposes a new
accelerated proximal MCMC method called latent space SK-ROCK (ls SK-ROCK),
which tightly combines the benefits of the two aforementioned strategies.
Additionally, instead of viewing the augmented posterior distribution as an
approximation of the original model, we propose to consider it as a
generalisation of this model. Following on from this, we empirically show that
there is a range of values for the relaxation parameter for which the accuracy
of the model improves, and propose a stochastic optimisation algorithm to
automatically identify the optimal amount of relaxation for a given problem. In
this regime, ls SK-ROCK converges faster than competing approaches from the
state of the art, and also achieves better accuracy since the underlying
augmented Bayesian model has a higher Bayesian evidence. The proposed
methodology is demonstrated with a range of numerical experiments related to
image deblurring and inpainting, as well as with comparisons with alternative
approaches from the state of the art. An open-source implementation of the
proposed MCMC methods is available from
https://github.com/luisvargasmieles/ls-MCMC.
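The data-augmentation idea behind the split Gibbs sampler can be illustrated with a minimal sketch. The example below is a deliberately simple 1-D Gaussian model, not the paper's imaging model or the proposed ls SK-ROCK sampler: the likelihood, prior, and all symbols (y, sigma, tau, rho) are illustrative assumptions. An auxiliary variable z is coupled to x through a quadratic term whose scale rho plays the role of the relaxation parameter, and the sampler alternates the two exact Gaussian conditionals.

```python
import numpy as np

# Toy split-Gibbs sketch (illustration only, NOT the paper's ls SK-ROCK).
# Original target: p(x | y) with likelihood N(y; x, sigma^2), prior N(x; 0, tau^2).
# Augmented target couples x to an auxiliary variable z:
#   p(x, z | y) ∝ exp(-(y-x)^2/(2 sigma^2) - z^2/(2 tau^2) - (x-z)^2/(2 rho^2)),
# where rho is the relaxation parameter; as rho -> 0 the augmented model
# approaches the original one, at the price of slower Gibbs mixing.

def split_gibbs(y, sigma2, tau2, rho2, n_iter, rng):
    x, z = 0.0, 0.0
    xs = np.empty(n_iter)
    for t in range(n_iter):
        # Conditional x | z, y: Gaussian with precision 1/sigma2 + 1/rho2.
        prec_x = 1.0 / sigma2 + 1.0 / rho2
        mean_x = (y / sigma2 + z / rho2) / prec_x
        x = rng.normal(mean_x, np.sqrt(1.0 / prec_x))
        # Conditional z | x: Gaussian with precision 1/tau2 + 1/rho2.
        prec_z = 1.0 / tau2 + 1.0 / rho2
        mean_z = (x / rho2) / prec_z
        z = rng.normal(mean_z, np.sqrt(1.0 / prec_z))
        xs[t] = x
    return xs

rng = np.random.default_rng(0)
y, sigma2, tau2, rho2 = 2.0, 1.0, 1.0, 0.09  # rho = 0.3
xs = split_gibbs(y, sigma2, tau2, rho2, n_iter=20000, rng=rng)

# Marginalising z shows the augmented model's effective prior on x is
# N(0, tau^2 + rho^2), so its posterior mean is y*(tau2+rho2)/(tau2+rho2+sigma2):
# the relaxation changes the model itself, which is the generalisation
# viewpoint the paper takes.
expected = y * (tau2 + rho2) / (tau2 + rho2 + sigma2)
print(np.mean(xs[1000:]), expected)
```

In this all-Gaussian toy case both conditionals can be sampled exactly; in the non-smooth imaging models the paper targets, the x-conditional is instead explored with an accelerated proximal Langevin kernel, which is where the SK-ROCK machinery enters.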
Related papers
- Learning Gaussian Representation for Eye Fixation Prediction [54.88001757991433]
Existing eye fixation prediction methods map input images to dense fixation maps generated from raw fixation points.
We introduce Gaussian Representation for eye fixation modeling.
We design our framework upon some lightweight backbones to achieve real-time fixation prediction.
arXiv Detail & Related papers (2024-03-21T20:28:22Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- Bayesian Pseudo-Coresets via Contrastive Divergence [5.479797073162603]
We introduce a novel approach for constructing pseudo-coresets by utilizing contrastive divergence.
It eliminates the need for approximations in the pseudo-coreset construction process.
We conduct extensive experiments on multiple datasets, demonstrating its superiority over existing BPC techniques.
arXiv Detail & Related papers (2023-03-20T17:13:50Z)
- Towards Practical Preferential Bayesian Optimization with Skew Gaussian Processes [8.198195852439946]
We study preferential Bayesian optimization (BO) where reliable feedback is limited to pairwise comparison called duels.
An important challenge in preferential BO, which uses the preferential Gaussian process (GP) model to represent flexible preference structure, is that the posterior distribution is a computationally intractable skew GP.
We develop a new method that achieves both high computational efficiency and low sample complexity, and then demonstrate its effectiveness through extensive numerical experiments.
arXiv Detail & Related papers (2023-02-03T03:02:38Z)
- Langevin Monte Carlo for Contextual Bandits [72.00524614312002]
Langevin Monte Carlo Thompson Sampling (LMC-TS) is proposed to directly sample from the posterior distribution in contextual bandits.
We prove that the proposed algorithm achieves the same sublinear regret bound as the best Thompson sampling algorithms for a special case of contextual bandits.
arXiv Detail & Related papers (2022-06-22T17:58:23Z)
- Distributed Sketching for Randomized Optimization: Exact Characterization, Concentration and Lower Bounds [54.51566432934556]
We consider distributed optimization methods for problems where forming the Hessian is computationally challenging.
We leverage randomized sketches for reducing the problem dimensions as well as preserving privacy and improving straggler resilience in asynchronous distributed systems.
arXiv Detail & Related papers (2022-03-18T05:49:13Z)
- Fast Doubly-Adaptive MCMC to Estimate the Gibbs Partition Function with Weak Mixing Time Bounds [7.428782604099876]
A major obstacle to practical applications of Gibbs distributions is the need to estimate their partition functions.
We present a novel method for reducing the computational complexity of rigorously estimating the partition functions.
arXiv Detail & Related papers (2021-11-14T15:42:02Z)
- Nesterov Accelerated ADMM for Fast Diffeomorphic Image Registration [63.15453821022452]
Recent developments in approaches based on deep learning have achieved sub-second runtimes for DiffIR.
We propose a simple iterative scheme that functionally composes intermediate non-stationary velocity fields.
We then propose a convex optimisation model that uses a regularisation term of arbitrary order to impose smoothness on these velocity fields.
arXiv Detail & Related papers (2021-09-26T19:56:45Z)
- Structured Stochastic Gradient MCMC [20.68905354115655]
We propose a new non-parametric variational approximation that makes no assumptions about the approximate posterior's functional form.
We obtain better predictive likelihoods and larger effective sample sizes than full SGMCMC.
arXiv Detail & Related papers (2021-07-19T17:18:10Z)
- What Are Bayesian Neural Network Posteriors Really Like? [63.950151520585024]
We show that Hamiltonian Monte Carlo can achieve significant performance gains over standard and deep ensembles.
We also show that deep distributions are similarly close to HMC as standard SGLD, and closer than standard variational inference.
arXiv Detail & Related papers (2021-04-29T15:38:46Z)
- An adaptive Hessian approximated stochastic gradient MCMC method [12.93317525451798]
We present an adaptive Hessian approximated gradient MCMC method to incorporate local geometric information while sampling from the posterior.
We adopt a magnitude-based weight pruning method to enforce the sparsity of the network.
arXiv Detail & Related papers (2020-10-03T16:22:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.