Approximate sampling and estimation of partition functions using neural networks
- URL: http://arxiv.org/abs/2209.10423v1
- Date: Wed, 21 Sep 2022 15:16:45 GMT
- Title: Approximate sampling and estimation of partition functions using neural networks
- Authors: George T. Cantwell
- Abstract summary: We show how variational autoencoders (VAEs) can be applied to this task.
We invert the logic and train the VAE to fit a simple and tractable distribution, on the assumption of a complex and intractable latent distribution, specified up to normalization.
This procedure constructs approximations without the use of training data or Markov chain Monte Carlo sampling.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the closely related problems of sampling from a distribution
known up to a normalizing constant, and estimating said normalizing constant.
We show how variational autoencoders (VAEs) can be applied to this task. In
their standard applications, VAEs are trained to fit data drawn from an
intractable distribution. We invert the logic and train the VAE to fit a simple
and tractable distribution, on the assumption of a complex and intractable
latent distribution, specified up to normalization. This procedure constructs
approximations without the use of training data or Markov chain Monte Carlo
sampling. We illustrate our method on three examples: the Ising model, graph
clustering, and ranking.
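The inverted-VAE recipe described in the abstract can be made concrete with a short sketch. The code below is an illustrative, assumption-laden reading of the abstract rather than the paper's implementation: the tractable "data" distribution is taken to be a standard Gaussian, the intractable latent target is a toy 2-D double-well energy standing in for the Ising, clustering, and ranking targets, and the encoder and decoder are small diagonal-Gaussian networks.

```python
# Minimal sketch of the "inverted VAE" idea from the abstract; the toy energy,
# Gaussian tractable distribution, and network sizes are illustrative assumptions.
import math
import torch
import torch.nn as nn

d = 2

def neg_energy(z):
    # log of the unnormalized target p*(z) = exp(-E(z)); a toy 2-D double well
    return -(z[:, 0] ** 2 - 2.0) ** 2 - 0.5 * z[:, 1] ** 2

class CondGaussian(nn.Module):
    """Maps its input to the mean and log-std of a diagonal Gaussian."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d, 64), nn.Tanh(), nn.Linear(64, 2 * d))
    def forward(self, v):
        mu, log_sig = self.net(v).chunk(2, dim=-1)
        return mu, log_sig

def gauss_logpdf(v, mu, log_sig):
    return (-0.5 * ((v - mu) / log_sig.exp()) ** 2
            - log_sig - 0.5 * math.log(2 * math.pi)).sum(-1)

encoder, decoder = CondGaussian(), CondGaussian()   # q(z|x) and p(x|z)
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

for _ in range(2000):
    x = torch.randn(256, d)                          # "data" from the simple, tractable distribution
    mu_z, ls_z = encoder(x)
    z = mu_z + ls_z.exp() * torch.randn_like(mu_z)   # reparameterized z ~ q(z|x)
    mu_x, ls_x = decoder(z)
    # ELBO with the unnormalized target in place of the latent prior; it
    # lower-bounds log p(x) + log Z, so no target samples or MCMC are needed.
    elbo = gauss_logpdf(x, mu_x, ls_x) + neg_energy(z) - gauss_logpdf(z, mu_z, ls_z)
    loss = -elbo.mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Approximate sampling from the target: push tractable samples through the encoder.
x = torch.randn(1000, d)
mu_z, ls_z = encoder(x)
z_samples = mu_z + ls_z.exp() * torch.randn_like(mu_z)
```

Because the unnormalized ELBO above lower-bounds log p(x) + log Z, its trained value also gives a handle on the normalizing constant, though the precise estimator used in the paper may differ from this sketch.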
Related papers
- A Practical Diffusion Path for Sampling [8.174664278172367]
Diffusion models are used in generative modeling to estimate score vectors guiding a Langevin process.
Previous approaches rely on Monte Carlo estimators that are either computationally heavy to implement or sample-inefficient.
We propose a computationally attractive alternative, relying on the so-called dilation path, that yields score vectors that are available in closed-form.
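The summary's central ingredient, score vectors guiding a Langevin process, can be illustrated generically. The sketch below runs an unadjusted Langevin loop with a toy two-component Gaussian-mixture score; it is not the paper's dilation-path construction, whose closed-form scores are not reproduced here.

```python
# Minimal sketch of a Langevin process driven by a score function; the toy
# mixture score, step size, and chain count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def score(x):
    # score (gradient of the log-density) of a 1-D two-component Gaussian mixture
    centers = np.array([-2.0, 2.0])
    logp = -0.5 * (x[:, None] - centers) ** 2            # per-component log-densities (up to const.)
    w = np.exp(logp - logp.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                    # component responsibilities
    return -(w * (x[:, None] - centers)).sum(axis=1)     # grad log p(x)

x = rng.normal(size=5000)        # initialize chains from a standard normal
step = 0.01
for _ in range(2000):            # unadjusted Langevin algorithm (ULA)
    x = x + step * score(x) + np.sqrt(2 * step) * rng.normal(size=x.shape)
# x now holds approximate samples from the two-mode mixture
```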
arXiv Detail & Related papers (2024-06-20T07:00:56Z)
- Stochastic Localization via Iterative Posterior Sampling [2.1383136715042417]
We consider a general localization framework and introduce an explicit class of observation processes, associated with flexible denoising schedules.
We provide a complete methodology, Stochastic Localization via Iterative Posterior Sampling (SLIPS), to obtain approximate samples of this dynamics and, as a byproduct, samples from the target distribution.
We illustrate the benefits and applicability of SLIPS on several benchmarks of multi-modal distributions, including mixtures in increasing dimensions, logistic regression, and a high-dimensional field system from statistical mechanics.
arXiv Detail & Related papers (2024-02-16T15:28:41Z)
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Adaptive Annealed Importance Sampling with Constant Rate Progress [68.8204255655161]
Annealed Importance Sampling (AIS) synthesizes weighted samples from an intractable distribution.
We propose the Constant Rate AIS algorithm and its efficient implementation for $\alpha$-divergences.
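For readers unfamiliar with AIS itself, a plain NumPy sketch of the base algorithm is given below; the constant-rate schedule and $\alpha$-divergence machinery added by the paper are not reproduced, and the toy double-well target is an assumption.

```python
# Minimal sketch of plain annealed importance sampling (AIS) on a toy target;
# schedule, proposal scale, and target are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):        # unnormalized log-target: a 1-D double well
    return -(x ** 2 - 2.0) ** 2

def log_base(x):          # tractable base: standard normal (normalized)
    return -0.5 * x ** 2 - 0.5 * np.log(2 * np.pi)

n, betas = 2000, np.linspace(0.0, 1.0, 200)   # particles and annealing schedule
x = rng.normal(size=n)                        # exact samples from the base
log_w = np.zeros(n)

for b_prev, b in zip(betas[:-1], betas[1:]):
    # importance-weight increment for moving from temperature b_prev to b
    log_w += (b - b_prev) * (log_target(x) - log_base(x))
    # one Metropolis-Hastings step targeting the intermediate distribution at b
    prop = x + 0.5 * rng.normal(size=n)
    logpi_prop = (1 - b) * log_base(prop) + b * log_target(prop)
    logpi_x = (1 - b) * log_base(x) + b * log_target(x)
    accept = np.log(rng.uniform(size=n)) < logpi_prop - logpi_x
    x = np.where(accept, prop, x)

# log of the target's normalizing constant (the base's constant is 1)
log_Z = np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()
```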
arXiv Detail & Related papers (2023-06-27T08:15:28Z)
- Tensorizing flows: a tool for variational inference [0.0]
We introduce an extension of normalizing flows in which the Gaussian reference is replaced with a reference distribution constructed via a tensor network.
We show that by combining flows with tensor networks on difficult variational inference tasks, we can improve on the results obtained by using either tool without the other.
arXiv Detail & Related papers (2023-05-03T23:42:22Z) - Data thinning for convolution-closed distributions [2.299914829977005]
We propose data thinning, an approach for splitting an observation into two or more independent parts that sum to the original observation.
We show that data thinning can be used to validate the results of unsupervised learning approaches.
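Data thinning is easiest to see in the Poisson case, a canonical convolution-closed family. The short sketch below is illustrative; the choice of family and of the splitting fraction eps is an assumption, not something fixed by the paper.

```python
# Minimal sketch of data thinning for a convolution-closed family: a Poisson
# observation X ~ Poisson(lam) is split into X1 ~ Binomial(X, eps) and
# X2 = X - X1, which are independent Poisson(eps*lam) and Poisson((1-eps)*lam).
import numpy as np

rng = np.random.default_rng(0)
lam, eps = 10.0, 0.5

X = rng.poisson(lam, size=100_000)     # original observations
X1 = rng.binomial(X, eps)              # "training" part
X2 = X - X1                            # "validation" part; X1 + X2 == X exactly

# Empirically, X1 and X2 behave like independent Poissons with the split means:
print(X1.mean(), X2.mean())            # approx. eps*lam and (1-eps)*lam
print(np.corrcoef(X1, X2)[0, 1])       # approx. 0, consistent with independence
```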
arXiv Detail & Related papers (2023-01-18T02:47:41Z) - Wrapped Distributions on homogeneous Riemannian manifolds [58.720142291102135]
Control over distribution properties such as parameters, symmetry, and modality yields a family of flexible distributions.
We empirically validate our approach by utilizing our proposed distributions within a variational autoencoder and a latent space network model.
arXiv Detail & Related papers (2022-04-20T21:25:21Z) - Sampling from Arbitrary Functions via PSD Models [55.41644538483948]
We take a two-step approach by first modeling the probability distribution and then sampling from that model.
We show that these models can approximate a large class of densities concisely using few evaluations, and present a simple algorithm to effectively sample from these models.
arXiv Detail & Related papers (2021-10-20T12:25:22Z) - Efficient Bayesian Sampling Using Normalizing Flows to Assist Markov
Chain Monte Carlo Methods [13.649384403827359]
Normalizing flows can generate complex target distributions and show promise in many applications in Bayesian statistics.
Since no data set from the target posterior distribution is available beforehand, the flow is typically trained using the reverse Kullback-Leibler (KL) divergence that only requires samples from a base distribution.
Here we explore a distinct training strategy, using the direct KL divergence as loss, in which samples from the posterior are generated by (i) assisting a local MCMC algorithm on the posterior with a normalizing flow to accelerate its mixing rate and (ii) using the data generated this way to train the flow.
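The contrast between the two losses mentioned above can be sketched with a trivially simple flow. Below, the "flow" is a single affine transform, the target is a toy 1-D Gaussian, and the "posterior" samples for the direct KL phase are faked by drawing from that known target as a stand-in for the flow-assisted MCMC output described in the summary; all of these are illustrative assumptions.

```python
# Minimal sketch contrasting reverse-KL training (samples from the flow only)
# with direct/forward-KL training (samples from the posterior); the affine
# "flow" and the toy Gaussian target are illustrative assumptions.
import math
import torch

target_mu, target_sigma = torch.tensor([2.0]), torch.tensor([0.5])

def log_target(z):                       # toy unnormalized log-posterior
    return -0.5 * ((z - target_mu) / target_sigma) ** 2

mu = torch.zeros(1, requires_grad=True)  # affine "flow": z = mu + exp(log_sigma) * u
log_sigma = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=1e-2)

def flow_logq(z):                        # flow density via the change of variables
    u = (z - mu) / log_sigma.exp()
    return -0.5 * u ** 2 - 0.5 * math.log(2 * math.pi) - log_sigma

for step in range(2000):
    if step < 1000:
        # reverse KL: E_q[log q(z) - log p(z)]; needs only samples from the flow itself
        z = mu + log_sigma.exp() * torch.randn(256, 1)
        loss = (flow_logq(z) - log_target(z)).mean()
    else:
        # direct (forward) KL: maximize the flow's log-likelihood of posterior samples;
        # here the "MCMC" samples are faked by sampling the known toy target
        z = (target_mu + target_sigma * torch.randn(256, 1)).detach()
        loss = -flow_logq(z).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```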
arXiv Detail & Related papers (2021-07-16T16:40:36Z) - Stein Variational Inference for Discrete Distributions [70.19352762933259]
We propose a simple yet general framework that transforms discrete distributions to equivalent piecewise continuous distributions.
Our method outperforms traditional algorithms such as Gibbs sampling and discontinuous Hamiltonian Monte Carlo.
We demonstrate that our method provides a promising tool for learning ensembles of binarized neural networks (BNNs).
In addition, such a transform can be employed straightforwardly in the gradient-free kernelized Stein discrepancy to perform goodness-of-fit (GOF) tests on discrete distributions.
arXiv Detail & Related papers (2020-03-01T22:45:41Z) - Efficiently Sampling Functions from Gaussian Process Posteriors [76.94808614373609]
We propose an easy-to-use and general-purpose approach for fast posterior sampling.
We demonstrate how decoupled sample paths accurately represent Gaussian process posteriors at a fraction of the usual cost.
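A decoupled posterior sample path can be sketched in a few lines: draw an approximate prior function from random Fourier features, then correct it with a data-dependent update (Matheron's rule). The kernel, lengthscale, noise level, and feature count below are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of pathwise (decoupled) GP posterior sampling: RFF prior draw
# plus a Matheron-style data update; hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
ell, noise, n_feat = 0.5, 0.1, 500                       # RBF lengthscale, noise std, feature count

X = np.linspace(-3, 3, 40)[:, None]                      # training inputs
y = np.sin(2 * X[:, 0]) + noise * rng.normal(size=40)    # toy observations
Xs = np.linspace(-4, 4, 200)[:, None]                    # test inputs

def rbf(A, B):
    return np.exp(-0.5 * ((A - B.T) / ell) ** 2)

# Random Fourier feature approximation of the RBF prior: f(x) ~ phi(x) @ w, w ~ N(0, I)
omega = rng.normal(scale=1.0 / ell, size=(1, n_feat))
tau = rng.uniform(0, 2 * np.pi, size=n_feat)
phi = lambda A: np.sqrt(2.0 / n_feat) * np.cos(A @ omega + tau)
w = rng.normal(size=n_feat)
f_prior_X, f_prior_Xs = phi(X) @ w, phi(Xs) @ w

# Matheron/pathwise update: correct the prior draw toward the observations
eps = noise * rng.normal(size=40)
v = np.linalg.solve(rbf(X, X) + noise ** 2 * np.eye(40), y - f_prior_X - eps)
f_post_Xs = f_prior_Xs + rbf(Xs, X) @ v                  # one posterior sample path on Xs
```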
arXiv Detail & Related papers (2020-02-21T14:03:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.