Parallel Affine Transformation Tuning of Markov Chain Monte Carlo
- URL: http://arxiv.org/abs/2401.16567v2
- Date: Wed, 29 May 2024 01:04:05 GMT
- Title: Parallel Affine Transformation Tuning of Markov Chain Monte Carlo
- Authors: Philip Schär, Michael Habeck, Daniel Rudolf
- Abstract summary: We propose a flexible and user-friendly scheme for adaptively learning an affine transformation of the sample space during sampling.
The combination of our scheme with Gibbsian polar slice sampling is shown to produce samples of high quality at comparatively low computational cost.
- Score: 1.0923877073891446
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance of Markov chain Monte Carlo samplers strongly depends on the properties of the target distribution such as its covariance structure, the location of its probability mass and its tail behavior. We explore the use of bijective affine transformations of the sample space to improve the properties of the target distribution and thereby the performance of samplers running in the transformed space. In particular, we propose a flexible and user-friendly scheme for adaptively learning the affine transformation during sampling. Moreover, the combination of our scheme with Gibbsian polar slice sampling is shown to produce samples of high quality at comparatively low computational cost in several settings based on real-world data.
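The abstract's core idea, running a sampler in a bijectively affine-transformed space whose transformation is learned adaptively from the chain's own history, can be illustrated with a minimal sketch. The sketch below pairs the transformation with a plain random-walk Metropolis kernel for simplicity; the paper itself combines it with Gibbsian polar slice sampling, and the exact update scheme here (mean/Cholesky whitening re-estimated periodically) is an assumption for illustration, not the authors' method.

```python
import numpy as np

def affine_transformed_rwm(log_density, x0, n_steps, step=0.5,
                           adapt_every=200, rng=None):
    """Random-walk Metropolis run in an adaptively affine-transformed space.

    The bijective affine map y = L^{-1} (x - mu) is re-estimated periodically
    from the chain's history (mu: sample mean, L: Cholesky factor of the
    sample covariance), so the sampler effectively sees a whitened target.
    Illustrative sketch only, not the paper's exact scheme.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = len(x0)
    mu, L = np.zeros(d), np.eye(d)   # start with the identity transformation
    x = np.asarray(x0, dtype=float)
    samples = [x.copy()]
    for t in range(1, n_steps + 1):
        # propose in transformed (whitened) coordinates, map back to x-space
        y = np.linalg.solve(L, x - mu)
        y_prop = y + step * rng.standard_normal(d)
        x_prop = mu + L @ y_prop
        # the Jacobian of an affine map is constant, so it cancels in the ratio
        if np.log(rng.uniform()) < log_density(x_prop) - log_density(x):
            x = x_prop
        samples.append(x.copy())
        # adapt the affine transformation from the history so far
        if t % adapt_every == 0:
            hist = np.array(samples)
            mu = hist.mean(axis=0)
            cov = np.cov(hist.T) + 1e-6 * np.eye(d)  # jitter keeps L well-defined
            L = np.linalg.cholesky(cov)
    return np.array(samples)
```

On a strongly correlated target, the whitening step lets the isotropic proposal match the target's covariance structure, which is precisely the performance dependence the abstract highlights.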
Related papers
- Covariance estimation using Markov chain Monte Carlo [2.209921757303168]
We show that when $\pi$ satisfies a Poincaré inequality and the chain possesses a spectral gap, we can achieve similar sample complexity using MCMC.
We provide guarantees regarding isotropic rounding procedures for sampling uniformly on convex bodies.
arXiv Detail & Related papers (2024-10-22T16:27:29Z)
- Adaptive Sampling for Continuous Group Equivariant Neural Networks [5.141137421503899]
We introduce an adaptive sampling approach that dynamically adjusts the sampling process to the symmetries in the data.
Our findings demonstrate improved model performance, and a marginal increase in memory efficiency.
arXiv Detail & Related papers (2024-09-13T11:50:09Z)
- Ai-Sampler: Adversarial Learning of Markov kernels with involutive maps [28.229819253644862]
We propose a method to parameterize and train transition kernels of Markov chains to achieve efficient sampling and good mixing.
This training procedure minimizes the total variation distance between the stationary distribution of the chain and the empirical distribution of the data.
arXiv Detail & Related papers (2024-06-04T17:00:14Z)
- Differentiating Metropolis-Hastings to Optimize Intractable Densities [51.16801956665228]
We develop an algorithm for automatic differentiation of Metropolis-Hastings samplers.
We apply gradient-based optimization to objectives expressed as expectations over intractable target densities.
arXiv Detail & Related papers (2023-06-13T17:56:02Z)
- Optimization of Annealed Importance Sampling Hyperparameters [77.34726150561087]
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediary distributions and optimize the bridging distributions to use fewer steps for sampling.
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
arXiv Detail & Related papers (2022-09-27T07:58:25Z)
- Wrapped Distributions on homogeneous Riemannian manifolds [58.720142291102135]
Control over the distributions' properties, such as parameters, symmetry and modality, yields a family of flexible distributions.
We empirically validate our approach by utilizing our proposed distributions within a variational autoencoder and a latent space network model.
arXiv Detail & Related papers (2022-04-20T21:25:21Z)
- Topographic VAEs learn Equivariant Capsules [84.33745072274942]
We introduce the Topographic VAE: a novel method for efficiently training deep generative models with topographically organized latent variables.
We show that such a model indeed learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST.
We demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks.
arXiv Detail & Related papers (2021-09-03T09:25:57Z)
- Parallelised Diffeomorphic Sampling-based Motion Planning [30.310891362316863]
We propose Parallelised Diffeomorphic Sampling-based Motion Planning (PDMP).
PDMP transforms sampling distributions of sampling-based motion planners, in a manner akin to normalising flows.
PDMP is able to leverage gradient information of costs, to inject specifications, in a manner similar to optimisation-based motion planning methods.
arXiv Detail & Related papers (2021-08-26T13:15:11Z)
- Oops I Took A Gradient: Scalable Sampling for Discrete Distributions [53.3142984019796]
We show that this approach outperforms generic samplers in a number of difficult settings.
We also demonstrate the use of our improved sampler for training deep energy-based models on high dimensional discrete data.
arXiv Detail & Related papers (2021-02-08T20:08:50Z)
- Sampling in Combinatorial Spaces with SurVAE Flow Augmented MCMC [83.48593305367523]
Hybrid Monte Carlo is a powerful Markov Chain Monte Carlo method for sampling from complex continuous distributions.
We introduce a new approach based on augmenting Monte Carlo methods with SurVAE Flows to sample from discrete distributions.
We demonstrate the efficacy of our algorithm on a range of examples from statistics, computational physics and machine learning, and observe improvements compared to alternative algorithms.
arXiv Detail & Related papers (2021-02-04T02:21:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.