Ai-Sampler: Adversarial Learning of Markov kernels with involutive maps
- URL: http://arxiv.org/abs/2406.02490v1
- Date: Tue, 4 Jun 2024 17:00:14 GMT
- Title: Ai-Sampler: Adversarial Learning of Markov kernels with involutive maps
- Authors: Evgenii Egorov, Ricardo Valperga, Efstratios Gavves
- Abstract summary: We propose a method to parameterize and train transition kernels of Markov chains to achieve efficient sampling and good mixing.
This training procedure minimizes the total variation distance between the stationary distribution of the chain and the empirical distribution of the data.
- Score: 28.229819253644862
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Markov chain Monte Carlo methods have become popular in statistics as versatile techniques to sample from complicated probability distributions. In this work, we propose a method to parameterize and train transition kernels of Markov chains to achieve efficient sampling and good mixing. This training procedure minimizes the total variation distance between the stationary distribution of the chain and the empirical distribution of the data. Our approach leverages involutive Metropolis-Hastings kernels constructed from reversible neural networks that ensure detailed balance by construction. We find that reversibility also implies $C_2$-equivariance of the discriminator function which can be used to restrict its function space.
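As a minimal illustration of the involutive construction, the sketch below implements a generic involutive Metropolis-Hastings step in NumPy, substituting the simplest possible involution (a swap with a freshly refreshed auxiliary variable) for the paper's learned reversible network; all names here are illustrative, not the authors' implementation.

```python
import numpy as np

def involutive_mh_step(x, log_p, log_q, rng):
    # Refresh the auxiliary variable v ~ q, then apply the involution
    # f(x, v) = (v, x). Because f is its own inverse and volume
    # preserving, accepting with the ratio below yields a kernel in
    # detailed balance with exp(log_p) by construction.
    v = rng.normal()
    x_new, v_new = v, x
    log_alpha = (log_p(x_new) + log_q(v_new)) - (log_p(x) + log_q(v))
    if np.log(rng.uniform()) < log_alpha:
        return x_new
    return x

# Toy run: target and auxiliary are both standard normal, so every
# proposal is accepted; a trained involution would replace the swap.
log_p = lambda z: -0.5 * z * z
log_q = lambda z: -0.5 * z * z
rng = np.random.default_rng(0)
x, chain = 3.0, []
for _ in range(1000):
    x = involutive_mh_step(x, log_p, log_q, rng)
    chain.append(x)
```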
Related papers
- Discrete generative diffusion models without stochastic differential equations: a tensor network approach [1.5839621757142595]
Diffusion models (DMs) are a class of generative machine learning methods.
We show how to use tensor networks (TNs) to efficiently define and sample such discrete models.
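As a rough illustration of the idea (not the paper's efficient contraction scheme), the following sketch defines an unnormalized discrete distribution through a tiny matrix-product state and samples it by brute-force enumeration; all tensors and sizes are made up.

```python
import numpy as np
from itertools import product

# A tiny matrix-product state over 4 binary sites; p(x) is proportional
# to the square of tr(A[0, x0] @ A[1, x1] @ A[2, x2] @ A[3, x3]).
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 2, 3, 3))  # (site, symbol, bond_in, bond_out)

def amplitude(x):
    M = np.eye(3)
    for site, sym in enumerate(x):
        M = M @ A[site, sym]
    return np.trace(M)

states = list(product([0, 1], repeat=4))
p = np.array([amplitude(x) ** 2 for x in states])
p /= p.sum()
sample = states[rng.choice(len(states), p=p)]
```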
arXiv Detail & Related papers (2024-07-15T18:00:11Z) - Markovian Flow Matching: Accelerating MCMC with Continuous Normalizing Flows [2.2530496464901106]
Continuous normalizing flows (CNFs) learn the probability path between a reference distribution and a target distribution by modeling the vector field generating said path using neural networks.
Recently, Lipman et al. (2022) introduced a simple and inexpensive method for training CNFs in generative modeling, termed flow matching (FM).
In this paper, we repurpose this method for probabilistic inference by incorporating Markovian sampling methods in evaluating the FM objective, and using the learned CNF to improve Monte Carlo sampling.
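For reference, here is a minimal flow-matching objective for the linear interpolation path, assuming a generic PyTorch vector-field network v_net; this is the textbook FM recipe, not the Markovian variant the paper proposes.

```python
import torch

def flow_matching_loss(v_net, x0, x1):
    # Conditional flow-matching loss for the linear path
    # x_t = (1 - t) * x0 + t * x1, whose target velocity is x1 - x0.
    t = torch.rand(x0.shape[0], 1)
    x_t = (1 - t) * x0 + t * x1
    target = x1 - x0
    pred = v_net(torch.cat([x_t, t], dim=1))  # v_theta(x_t, t)
    return ((pred - target) ** 2).mean()

# Tiny MLP vector field; x0 ~ reference (Gaussian), x1 ~ target samples.
v_net = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.SiLU(), torch.nn.Linear(64, 2))
opt = torch.optim.Adam(v_net.parameters(), lr=1e-3)
for _ in range(100):
    x0 = torch.randn(128, 2)              # reference samples
    x1 = torch.randn(128, 2) * 0.5 + 2.0  # stand-in "target" samples
    loss = flow_matching_loss(v_net, x0, x1)
    opt.zero_grad()
    loss.backward()
    opt.step()
```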
arXiv Detail & Related papers (2024-05-23T10:08:19Z) - Collaborative Heterogeneous Causal Inference Beyond Meta-analysis [68.4474531911361]
We propose a collaborative inverse propensity score estimator for causal inference with heterogeneous data.
Our method shows significant improvements over the methods based on meta-analysis when heterogeneity increases.
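For context, the single-site inverse-propensity-score building block looks as follows; this is a sketch of the standard estimator, not the paper's collaborative, heterogeneity-aware version.

```python
import numpy as np

def ipw_ate(y, t, e):
    # Inverse-propensity-weighted average treatment effect; e holds the
    # propensity scores P(T=1 | X), assumed known or estimated upstream.
    return np.mean(t * y / e - (1 - t) * y / (1 - e))

# Synthetic check: treatment adds +2.0 to the outcome.
rng = np.random.default_rng(0)
x = rng.uniform(size=5000)
e = 0.2 + 0.6 * x
t = (rng.uniform(size=5000) < e).astype(float)
y = x + 2.0 * t + rng.normal(scale=0.1, size=5000)
print(ipw_ate(y, t, e))  # close to 2.0
```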
arXiv Detail & Related papers (2024-04-24T09:04:36Z) - Uncertainty Quantification via Stable Distribution Propagation [60.065272548502]
We propose a new approach for propagating stable probability distributions through neural networks.
Our method is based on local linearization, which we show to be an optimal approximation in terms of total variation distance for the ReLU non-linearity.
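A minimal sketch of moment propagation by local linearization for the Gaussian case; the paper generalizes this to stable distributions and shows the linearization is optimal in total variation for ReLU.

```python
import numpy as np

def propagate_affine(mu, sigma, W, b):
    # Exact first two moments of W x + b for x ~ N(mu, sigma).
    return W @ mu + b, W @ sigma @ W.T

def propagate_relu(mu, sigma):
    # Linearize ReLU at the input mean: the Jacobian is the 0/1
    # diagonal indicator of active units, so the covariance is masked.
    d = (mu > 0).astype(float)
    return np.maximum(mu, 0.0), sigma * np.outer(d, d)
```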
arXiv Detail & Related papers (2024-02-13T09:40:19Z) - Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
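For orientation, here is the consensus-ADMM skeleton on a distributed least-squares problem; the paper adapts this optimization template to sampling, so treat the sketch as background rather than their algorithm.

```python
import numpy as np

def consensus_admm(As, bs, rho=1.0, iters=100):
    # Consensus ADMM for min_x sum_i 0.5 * ||A_i x - b_i||^2: each
    # worker keeps a local copy x_i; a consensus variable z and scaled
    # duals u_i tie the copies together.
    n = As[0].shape[1]
    z = np.zeros(n)
    xs = [np.zeros(n) for _ in As]
    us = [np.zeros(n) for _ in As]
    for _ in range(iters):
        for i, (A, b) in enumerate(zip(As, bs)):
            # The local update has a closed form for least squares.
            xs[i] = np.linalg.solve(A.T @ A + rho * np.eye(n),
                                    A.T @ b + rho * (z - us[i]))
        z = np.mean([x + u for x, u in zip(xs, us)], axis=0)
        for i in range(len(As)):
            us[i] += xs[i] - z
    return z

# Two "workers", each holding a private data shard.
rng = np.random.default_rng(0)
x_true = rng.normal(size=3)
As = [rng.normal(size=(20, 3)) for _ in range(2)]
bs = [A @ x_true for A in As]
print(consensus_admm(As, bs))  # approaches x_true
```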
arXiv Detail & Related papers (2024-01-29T02:08:40Z) - Wrapped Distributions on homogeneous Riemannian manifolds [58.720142291102135]
Control over the distributions' properties, such as their parameters, symmetry, and modality, yields a family of flexible distributions.
We empirically validate our approach by utilizing our proposed distributions within a variational autoencoder and a latent space network model.
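A minimal example of the wrapping construction on the simplest homogeneous manifold, the circle; the general case replaces the modulo with an exponential map on the manifold.

```python
import numpy as np

def sample_wrapped_normal(mu, sigma, size, rng):
    # Sample in the tangent space (the real line) and wrap onto the
    # circle; the push-forward of N(mu, sigma) is the wrapped normal.
    return np.mod(rng.normal(mu, sigma, size), 2 * np.pi)

rng = np.random.default_rng(0)
theta = sample_wrapped_normal(np.pi, 0.3, 1000, rng)
```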
arXiv Detail & Related papers (2022-04-20T21:25:21Z) - Learn Quasi-stationary Distributions of Finite State Markov Chain [2.780408966503282]
We propose a reinforcement learning (RL) approach to compute the expression of quasi-stationary distribution.
We minimize the KL-divergence of two Markovian path distributions induced by the candidate distribution and the true target distribution.
We derive the corresponding policy gradient theorem and design an actor-critic algorithm to learn the optimal solution and value function.
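For comparison, a classical (non-RL) way to compute the quasi-stationary distribution of a small substochastic matrix is power iteration with renormalization; the paper's contribution is learning this object with an actor-critic scheme instead.

```python
import numpy as np

def qsd_power_iteration(P_sub, iters=1000):
    # P_sub: transition probabilities restricted to the non-absorbing
    # states (rows sum to <= 1). Renormalizing after each step
    # conditions on survival; the fixed point is the quasi-stationary
    # distribution (the normalized left Perron eigenvector).
    nu = np.full(P_sub.shape[0], 1.0 / P_sub.shape[0])
    for _ in range(iters):
        nu = nu @ P_sub
        nu /= nu.sum()
    return nu

# Three transient states, each leaking 10% mass to an absorbing state.
P_sub = 0.9 * np.array([[0.5, 0.3, 0.2],
                        [0.1, 0.6, 0.3],
                        [0.2, 0.2, 0.6]])
print(qsd_power_iteration(P_sub))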
arXiv Detail & Related papers (2021-11-19T02:56:34Z) - Sampling in Combinatorial Spaces with SurVAE Flow Augmented MCMC [83.48593305367523]
Hybrid Monte Carlo is a powerful Markov Chain Monte Carlo method for sampling from complex continuous distributions.
We introduce a new approach based on augmenting Monte Carlo methods with SurVAE Flows to sample from discrete distributions.
We demonstrate the efficacy of our algorithm on a range of examples from statistics, computational physics and machine learning, and observe improvements compared to alternative algorithms.
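As a toy illustration of the continuous-to-discrete surjection that SurVAE-style argmax layers exploit, the Gumbel-max trick pushes a continuous density forward onto a categorical distribution; this is only the underlying idea, not the paper's sampler.

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.array([0.5, 1.5, -0.3])

# argmax over continuously perturbed logits is a surjection from a
# continuous density onto a categorical distribution.
g = rng.gumbel(size=(100_000, 3))
cats = np.argmax(logits + g, axis=1)

print(np.bincount(cats) / len(cats))          # empirical frequencies
print(np.exp(logits) / np.exp(logits).sum())  # softmax(logits)
```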
arXiv Detail & Related papers (2021-02-04T02:21:08Z) - Distributional Reinforcement Learning via Moment Matching [54.16108052278444]
We formulate a method that learns a finite set of statistics from each return distribution via neural networks.
Our method can be interpreted as implicitly matching all orders of moments between a return distribution and its Bellman target.
Experiments on the suite of Atari games show that our method outperforms the standard distributional RL baselines.
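A sketch of the implicit moment-matching idea via an empirical MMD loss between predicted return particles and Bellman-target particles; shapes, bandwidths, and the particle representation are assumptions for illustration.

```python
import torch

def mmd2(x, y, bandwidths=(1.0, 2.0, 4.0)):
    # Empirical squared MMD with a mixture of Gaussian kernels between
    # predicted return particles x and Bellman-target particles y;
    # matching kernel features implicitly matches all moment orders.
    def k(a, b):
        d2 = (a[:, None] - b[None, :]) ** 2
        return sum(torch.exp(-d2 / h) for h in bandwidths)
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

# z: particles predicted for (s, a); target: r + gamma * z', detached.
z = torch.randn(32, requires_grad=True)
target = (1.0 + 0.99 * torch.randn(32)).detach()
loss = mmd2(z, target)
loss.backward()
```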
arXiv Detail & Related papers (2020-07-24T05:18:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.