Sampling in Combinatorial Spaces with SurVAE Flow Augmented MCMC
- URL: http://arxiv.org/abs/2102.02374v1
- Date: Thu, 4 Feb 2021 02:21:08 GMT
- Title: Sampling in Combinatorial Spaces with SurVAE Flow Augmented MCMC
- Authors: Priyank Jaini, Didrik Nielsen and Max Welling
- Abstract summary: Hybrid Monte Carlo is a powerful Markov Chain Monte Carlo method for sampling from complex continuous distributions.
We introduce a new approach based on augmenting Monte Carlo methods with SurVAE Flows to sample from discrete distributions.
We demonstrate the efficacy of our algorithm on a range of examples from statistics, computational physics and machine learning, and observe improvements compared to alternative algorithms.
- Score: 83.48593305367523
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hybrid Monte Carlo (HMC) is a powerful Markov Chain Monte Carlo method for sampling from complex continuous distributions. However, a major limitation of HMC is its inability to be applied to discrete domains due to the lack of gradient signal. In this work, we introduce a new approach based on augmenting Monte Carlo methods with SurVAE Flows to sample from discrete distributions using a combination of neural transport methods like normalizing flows and variational dequantization, and the Metropolis-Hastings rule. Our method first learns a continuous embedding of the discrete space using a surjective map and subsequently learns a bijective transformation from the continuous space to an approximately Gaussian distributed latent variable. Sampling proceeds by simulating MCMC chains in the latent space and mapping these samples to the target discrete space via the learned transformations. We demonstrate the efficacy of our algorithm on a range of examples from statistics, computational physics and machine learning, and observe improvements compared to alternative algorithms.
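To make the pipeline concrete, here is a minimal Python sketch of the embed-sample-map-back loop: a toy discrete target on {0, ..., 9} is dequantized into a continuous density (the surjective embedding), a random-walk Metropolis-Hastings chain runs in the continuous space, and samples are mapped back by flooring. The target weights and proposal scale are illustrative assumptions, and the learned SurVAE flow that would Gaussianize the latent space is replaced by the identity map for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discrete target on {0, ..., 9} (illustrative weights, not from the paper).
w = np.array([1., 2., 4., 8., 4., 2., 1., 2., 4., 2.])
p = w / w.sum()

def log_q(y):
    """Dequantized continuous density: q(k + u) = p(k) for u in [0, 1)."""
    if 0.0 <= y < 10.0:
        return np.log(p[int(np.floor(y))])
    return -np.inf

# The paper additionally learns a bijective flow g mapping y to an
# approximately Gaussian latent z and runs MCMC on z; we use the identity
# map here so the sketch stays self-contained.
y, counts = 4.5, np.zeros(10)
for _ in range(50_000):
    y_prop = y + 0.8 * rng.standard_normal()          # random-walk proposal
    if np.log(rng.uniform()) < log_q(y_prop) - log_q(y):
        y = y_prop                                    # Metropolis-Hastings accept
    counts[int(np.floor(y))] += 1                     # map back: k = floor(y)

print(counts / counts.sum())   # should approximate p
```

In the full method the chain instead runs in the approximately Gaussian latent space produced by the learned flow, which is what makes gradient-based moves effective on an originally discrete target.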
Related papers
- Ai-Sampler: Adversarial Learning of Markov kernels with involutive maps [28.229819253644862]
We propose a method to parameterize and train transition kernels of Markov chains to achieve efficient sampling and good mixing.
This training procedure minimizes the total variation distance between the stationary distribution of the chain and the empirical distribution of the data.
arXiv Detail & Related papers (2024-06-04T17:00:14Z)
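As a toy illustration of the involutive mechanism such methods build on (the bimodal target and the fixed swap map below are assumptions for this example, not the paper's learned kernel): an auxiliary variable is refreshed, an involution proposes a move, and a single Metropolis-Hastings test keeps the extended target invariant.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D bimodal target, unnormalized (an assumption for this example).
def log_p(x):
    return np.logaddexp(-0.5 * (x - 2) ** 2, -0.5 * (x + 2) ** 2)

def log_n(v):
    return -0.5 * v ** 2   # standard-normal auxiliary, up to a constant

# Extended target pi(x, v) = p(x) N(v). The swap (x, v) -> (v, x) is an
# involution, so one Metropolis-Hastings test keeps pi invariant.
x, chain = 0.0, []
for _ in range(50_000):
    v = rng.standard_normal()                       # refresh auxiliary
    log_acc = (log_p(v) + log_n(x)) - (log_p(x) + log_n(v))
    if np.log(rng.uniform()) < log_acc:
        x = v                                       # accept swapped state
    chain.append(x)
```

Ai-Sampler's contribution is to replace the fixed swap with a trainable involutive neural map, optimized to drive the chain's stationary distribution toward the data distribution.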
- Markovian Flow Matching: Accelerating MCMC with Continuous Normalizing Flows [2.2530496464901106]
Continuous normalizing flows (CNFs) learn the probability path between a reference distribution and a target distribution by modeling the vector field generating said path using neural networks.
Recently, Lipman et al. (2022) introduced a simple and inexpensive method for training CNFs in generative modeling, termed flow matching (FM).
In this paper, we repurpose this method for probabilistic inference by incorporating Markovian sampling methods in evaluating the FM objective, and using the learned CNF to improve Monte Carlo sampling.
arXiv Detail & Related papers (2024-05-23T10:08:19Z)
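A minimal sketch of the flow-matching regression itself (linear probability path, PyTorch). The two-mode toy target is an assumption standing in for the samples that Markovian Flow Matching would obtain from its Markov chain.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def sample_target(n):
    """Toy 2-D two-mode target (assumption)."""
    c = torch.randint(0, 2, (n, 1)).float() * 4 - 2     # modes at +/-2
    return c + 0.3 * torch.randn(n, 2)

# v_theta(x, t): the learned vector field of the CNF
net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x1 = sample_target(256)                 # "data" end of the path
    x0 = torch.randn_like(x1)               # reference (standard normal)
    t = torch.rand(x1.size(0), 1)
    xt = (1 - t) * x0 + t * x1              # linear probability path
    v_target = x1 - x0                      # conditional velocity along it
    v = net(torch.cat([xt, t], dim=1))
    loss = ((v - v_target) ** 2).mean()     # flow-matching regression loss
    opt.zero_grad(); loss.backward(); opt.step()
```

After training, new samples are obtained by integrating dx/dt = v(x, t) from t = 0 (reference noise) to t = 1.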
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
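For context, here is classic consensus ADMM on a least-squares problem of the kind the paper's experiments use. This is a sketch of the optimization splitting only, not the paper's sampling scheme; problem sizes, rho, and the ground truth are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Consensus ADMM for a least-squares problem split across 4 workers.
d, rho, x_true = 3, 1.0, np.array([1.0, -2.0, 0.5])
A = [rng.standard_normal((50, d)) for _ in range(4)]
b = [Ai @ x_true + 0.1 * rng.standard_normal(50) for Ai in A]

x = [np.zeros(d) for _ in range(4)]   # local primal variables
u = [np.zeros(d) for _ in range(4)]   # scaled dual variables
z = np.zeros(d)                       # consensus variable
for _ in range(100):
    x = [np.linalg.solve(Ai.T @ Ai + rho * np.eye(d),
                         Ai.T @ bi + rho * (z - ui))
         for Ai, bi, ui in zip(A, b, u)]                   # local solves
    z = np.mean([xi + ui for xi, ui in zip(x, u)], axis=0) # averaging
    u = [ui + xi - z for xi, ui in zip(x, u)]              # dual update

print(z)   # close to x_true
```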
- Noise-Free Sampling Algorithms via Regularized Wasserstein Proximals [3.4240632942024685]
We consider the problem of sampling from a distribution governed by a potential function.
This work proposes an explicit score-based MCMC method that is deterministic, resulting in a deterministic evolution for particles.
arXiv Detail & Related papers (2023-08-28T23:51:33Z)
- Reverse Diffusion Monte Carlo [19.35592726471155]
We propose a novel Monte Carlo sampling algorithm called reverse diffusion Monte Carlo (rdMC).
rdMC is distinct from Markov chain Monte Carlo (MCMC) methods.
arXiv Detail & Related papers (2023-07-05T05:42:03Z)
- Sampling with Mollified Interaction Energy Descent [57.00583139477843]
We present a new optimization-based method for sampling called mollified interaction energy descent (MIED).
MIED minimizes a new class of energies on probability measures called mollified interaction energies (MIEs).
We show experimentally that for unconstrained sampling problems our algorithm performs on par with existing particle-based algorithms like SVGD.
arXiv Detail & Related papers (2022-10-24T16:54:18Z)
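To give the flavor of particle-based energy descent, here is a generic attraction-repulsion scheme in the same spirit: particles ascend the target's log-density while a pairwise kernel term keeps them spread out. This is not MIED's actual mollified energy; the bimodal target, kernel, and step sizes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def grad_log_p(x):
    """Score of a toy 1-D bimodal target p(x) ~ N(2,1) + N(-2,1) (assumption)."""
    a = np.exp(-0.5 * (x - 2) ** 2)
    b = np.exp(-0.5 * (x + 2) ** 2)
    return (-(x - 2) * a - (x + 2) * b) / (a + b)

n, h, lr = 100, 0.25, 0.05
x = rng.standard_normal(n)                       # initial particles
for _ in range(1000):
    d = x[:, None] - x[None, :]                  # pairwise differences
    rep = (d / h * np.exp(-0.5 * d ** 2 / h)).sum(axis=1) / n  # repulsion
    x += lr * (grad_log_p(x) + rep)              # ascend log p, repel neighbors
```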
- Reconstructing the Universe with Variational self-Boosted Sampling [7.922637707393503]
Traditional algorithms such as Hamiltonian Monte Carlo (HMC) are computationally inefficient because they generate correlated samples.
Here we develop a hybrid scheme called variational self-boosted sampling (VBS), combining variational inference with HMC, to mitigate the drawbacks of both.
VBS generates better-quality samples than simple VI approaches and reduces the correlation length in the sampling phase by a factor of 10-50 over using only HMC.
arXiv Detail & Related papers (2022-06-28T21:30:32Z)
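A rough sketch of the hybrid idea: draws from a cheap variational fit initialize HMC chains, which then correct the approximation. Assumptions here: a correlated-Gaussian toy target, a diagonal-Gaussian stand-in for the variational stage (VBS uses a learned approximation), and hand-picked step sizes.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy correlated Gaussian target (assumption).
C = np.array([[1.0, 0.8], [0.8, 1.0]])
Ci = np.linalg.inv(C)
log_p = lambda x: -0.5 * x @ Ci @ x
grad_log_p = lambda x: -Ci @ x

def hmc_step(x, eps=0.15, L=20):
    """One leapfrog HMC transition targeting log_p."""
    p0 = rng.standard_normal(2)
    xn, pn = x.copy(), p0 + 0.5 * eps * grad_log_p(x)
    for _ in range(L - 1):
        xn = xn + eps * pn
        pn = pn + eps * grad_log_p(xn)
    xn = xn + eps * pn
    pn = pn + 0.5 * eps * grad_log_p(xn)
    log_acc = (log_p(xn) - 0.5 * pn @ pn) - (log_p(x) - 0.5 * p0 @ p0)
    return xn if np.log(rng.uniform()) < log_acc else x

# Variationally "boosted" initialization: start each chain from the
# cheap approximation instead of an arbitrary point.
q_scale = np.sqrt(np.diag(C))
chains = [q_scale * rng.standard_normal(2) for _ in range(8)]
for _ in range(500):
    chains = [hmc_step(x) for x in chains]   # post-warmup states are samples
```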
- Deterministic Gibbs Sampling via Ordinary Differential Equations [77.42706423573573]
This paper presents a general construction of deterministic measure-preserving dynamics using autonomous ODEs and tools from differential geometry.
We show how Hybrid Monte Carlo and other deterministic samplers follow as special cases of our theory.
arXiv Detail & Related papers (2021-06-18T15:36:09Z)
- Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
arXiv Detail & Related papers (2020-11-18T16:40:45Z)
- Pathwise Conditioning of Gaussian Processes [72.61885354624604]
Conventional approaches for simulating Gaussian process posteriors view samples as draws from marginal distributions of process values at finite sets of input locations.
This distribution-centric characterization leads to generative strategies that scale cubically in the size of the desired random vector.
We show how this pathwise interpretation of conditioning gives rise to a general family of approximations that lend themselves to efficiently sampling Gaussian process posteriors.
arXiv Detail & Related papers (2020-11-08T17:09:37Z)
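The pathwise view is commonly implemented via Matheron's rule: draw a joint prior sample, then correct it with the data residual. A minimal sketch for 1-D GP regression (the kernel, noise level, and test grid are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

def k(a, b, ls=0.5):
    """Squared-exponential kernel (kernel choice is an assumption)."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

X = np.array([-1.0, 0.0, 1.5])          # training inputs
y = np.array([0.3, -0.2, 0.8])          # noisy observations
sigma = 0.1                             # observation noise std
Xs = np.linspace(-3, 3, 200)            # test grid

# One joint prior draw over test and training locations.
Z = np.concatenate([Xs, X])
Lz = np.linalg.cholesky(k(Z, Z) + 1e-8 * np.eye(len(Z)))
f = Lz @ rng.standard_normal(len(Z))
f_s, f_X = f[:len(Xs)], f[len(Xs):]
eps = sigma * rng.standard_normal(len(X))

# Matheron's rule: correct the prior draw with the data residual.
K = k(X, X) + sigma ** 2 * np.eye(len(X))
post_draw = f_s + k(Xs, X) @ np.linalg.solve(K, y - f_X - eps)
```

Because the conditioning acts on a function draw, repeated posterior samples reuse one small training-size solve rather than factorizing a covariance over all test locations, which is the cubic cost the summary above refers to.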