Learning Optimal Flows for Non-Equilibrium Importance Sampling
- URL: http://arxiv.org/abs/2206.09908v1
- Date: Mon, 20 Jun 2022 17:25:26 GMT
- Title: Learning Optimal Flows for Non-Equilibrium Importance Sampling
- Authors: Yu Cao and Eric Vanden-Eijnden
- Abstract summary: We develop a method to perform calculations based on generating samples from a simple base distribution, transporting them along the flow generated by a velocity field, and performing averages along these flowlines.
On the theory side we discuss how to tailor the velocity field to the target and establish general conditions under which the proposed estimator is a perfect estimator.
On the computational side we show how to use deep learning to represent the velocity field by a neural network and train it towards the zero variance optimum.
- Score: 13.469239537683299
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many applications in computational sciences and statistical inference require
the computation of expectations with respect to complex high-dimensional
distributions with unknown normalization constants, as well as the estimation
of these constants. Here we develop a method to perform these calculations
based on generating samples from a simple base distribution, transporting them
along the flow generated by a velocity field, and performing averages along
these flowlines. This non-equilibrium importance sampling (NEIS) strategy is
straightforward to implement, and can be used for calculations with arbitrary
target distributions. On the theory side we discuss how to tailor the velocity
field to the target and establish general conditions under which the proposed
estimator is a perfect estimator with zero variance. We also draw connections
between NEIS and approaches based on mapping a base distribution onto a target
via a transport map. On the computational side we show how to use deep learning
to represent the velocity field by a neural network and train it towards the
zero variance optimum. These results are illustrated numerically on high
dimensional examples, where we show that training the velocity field can
decrease the variance of the NEIS estimator by up to six orders of magnitude
compared to a vanilla estimator. We also show that NEIS performs better on
these examples than Neal's annealed importance sampling (AIS).
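The transport-map connection mentioned in the abstract can be illustrated with a minimal sketch: draw samples from a simple base density, push them through a map with tractable Jacobian, and reweight to estimate the target's normalization constant. This is a toy stand-in under assumed densities, not the paper's flowline estimator; the map `T`, the base `rho0`, and the target `q` below are all illustrative choices.

```python
import math
import random

def rho0(x):
    # standard normal base density (normalized)
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def q(x):
    # unnormalized target: Gaussian centered at 2, true Z = sqrt(2*pi)
    return math.exp(-(x - 2) ** 2 / 2)

def T(x):
    # transport map shifting the base onto the target; |det J_T| = 1
    return x + 2

random.seed(0)
n = 10_000
weights = []
for _ in range(n):
    x0 = random.gauss(0.0, 1.0)
    # importance weight: q(T(x0)) |det J_T| / rho0(x0)
    weights.append(q(T(x0)) * 1.0 / rho0(x0))

Z_hat = sum(weights) / n
print(Z_hat)  # with a perfect map every weight equals sqrt(2*pi): zero variance
```

Because `T` here maps the base exactly onto the (normalized) target, every weight is constant and the estimator has zero variance, which is the optimum the paper trains its velocity field toward; with a mismatched map the weights fluctuate and the variance grows.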
Related papers
- Dynamical Measure Transport and Neural PDE Solvers for Sampling [77.38204731939273]
We tackle the task of sampling from a probability density by transporting a tractable density to the target.
We employ physics-informed neural networks (PINNs) to approximate the respective partial differential equations (PDEs) solutions.
PINNs allow for simulation- and discretization-free optimization and can be trained very efficiently.
arXiv Detail & Related papers (2024-07-10T17:39:50Z)
- Liouville Flow Importance Sampler [2.3603292593876324]
We present the Liouville Flow Importance Sampler (LFIS), an innovative flow-based model for generating samples from unnormalized density functions.
LFIS learns a time-dependent velocity field that deterministically transports samples from a simple initial distribution to a complex target distribution.
We demonstrate the effectiveness of LFIS through its application to a range of benchmark problems, on many of which LFIS achieved state-of-the-art performance.
arXiv Detail & Related papers (2024-05-03T16:44:31Z)
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Training normalizing flows with computationally intensive target probability distributions [0.018416014644193065]
We propose an estimator for normalizing flows based on the REINFORCE algorithm.
It is up to ten times faster in wall-clock time and requires up to 30% less memory.
arXiv Detail & Related papers (2023-08-25T10:40:46Z)
- Adaptive Annealed Importance Sampling with Constant Rate Progress [68.8204255655161]
Annealed Importance Sampling (AIS) synthesizes weighted samples from an intractable distribution.
We propose the Constant Rate AIS algorithm and its efficient implementation for $\alpha$-divergences.
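For contrast with NEIS, vanilla AIS can be sketched in a few lines: anneal between a normalized base and an unnormalized target along a geometric path, taking one Metropolis step per temperature and accumulating log-weights. This is a generic illustration of Neal's AIS, not the Constant Rate variant proposed in this paper; the densities, temperature schedule, and step size are made up for the example.

```python
import math
import random

random.seed(1)

def log_f0(x):
    # base: standard normal (normalized, so Z0 = 1)
    return -x * x / 2 - 0.5 * math.log(2 * math.pi)

def log_f1(x):
    # unnormalized target: Gaussian at mean 3, true Z1 = sqrt(2*pi)
    return -(x - 3) ** 2 / 2

def log_f(x, beta):
    # geometric bridge between base and target
    return (1 - beta) * log_f0(x) + beta * log_f1(x)

def mh_step(x, beta, step=0.8):
    # one random-walk Metropolis step leaving f_beta invariant
    y = x + random.gauss(0.0, step)
    if math.log(random.random()) < log_f(y, beta) - log_f(x, beta):
        return y
    return x

K, n = 50, 2000
betas = [k / K for k in range(K + 1)]
log_ws = []
for _ in range(n):
    x = random.gauss(0.0, 1.0)          # exact sample from f0
    log_w = 0.0
    for k in range(1, K + 1):
        log_w += log_f(x, betas[k]) - log_f(x, betas[k - 1])
        x = mh_step(x, betas[k])        # move under the new temperature
    log_ws.append(log_w)

# E[w] estimates Z1/Z0 = sqrt(2*pi) ~ 2.507; average in a stable way
m = max(log_ws)
Z_hat = math.exp(m) * sum(math.exp(lw - m) for lw in log_ws) / n
print(Z_hat)
```

The estimate should land near sqrt(2*pi) ~ 2.507, with Monte Carlo noise that shrinks as the number of temperatures and samples grows; NEIS replaces this stochastic annealing with deterministic transport along learned flowlines.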
arXiv Detail & Related papers (2023-06-27T08:15:28Z)
- Efficient CDF Approximations for Normalizing Flows [64.60846767084877]
We build upon the diffeomorphic properties of normalizing flows to estimate the cumulative distribution function (CDF) over a closed region.
Our experiments on popular flow architectures and UCI datasets show a marked improvement in sample efficiency as compared to traditional estimators.
arXiv Detail & Related papers (2022-02-23T06:11:49Z)
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for BCE is in applications where multiple estimates of the same unknown are averaged for improved performance.
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
- Unrolling Particles: Unsupervised Learning of Sampling Distributions [102.72972137287728]
Particle filtering is used to compute good nonlinear estimates of complex systems.
We show in simulations that the resulting particle filter yields good estimates in a wide range of scenarios.
arXiv Detail & Related papers (2021-10-06T16:58:34Z)
- Statistical Optimal Transport posed as Learning Kernel Embedding [0.0]
This work takes the novel approach of posing statistical Optimal Transport (OT) as that of learning the transport plan's kernel mean embedding from sample-based estimates of marginal embeddings.
A key result is that, under very mild conditions, $\epsilon$-optimal recovery of the transport plan as well as the Barycentric-projection based transport map is possible with a sample complexity that is completely dimension-free.
arXiv Detail & Related papers (2020-02-08T14:58:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.