Building Normalizing Flows with Stochastic Interpolants
- URL: http://arxiv.org/abs/2209.15571v1
- Date: Fri, 30 Sep 2022 16:30:31 GMT
- Title: Building Normalizing Flows with Stochastic Interpolants
- Authors: Michael S. Albergo and Eric Vanden-Eijnden
- Abstract summary: A simple generative model based on a continuous-time normalizing flow between any pair of base and target distributions is proposed.
The velocity field of this flow is inferred from the probability current of a time-dependent distribution that interpolates between the base and the target in finite time.
- Score: 11.22149158986164
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A simple generative model based on a continuous-time normalizing flow between
any pair of base and target distributions is proposed. The velocity field of
this flow is inferred from the probability current of a time-dependent
distribution that interpolates between the base and the target in finite time.
Unlike conventional normalizing flow inference methods based on the maximum
likelihood principle, which require costly backpropagation through ODE solvers,
our interpolant approach leads to a simple quadratic loss for the velocity
itself, which is expressed in terms of expectations that are readily amenable to
empirical estimation. The flow can be used to generate samples from either the
base or target, and can be used to estimate the likelihood at any time along
the interpolant. The approach is contextualized in its relation to diffusions.
In particular, in situations where the base is a Gaussian distribution, we show
that the velocity of our normalizing flow can also be used to construct a
diffusion model to sample the target as well as to estimate its score. This
allows one to map methods based on stochastic differential equations onto those
based on ordinary differential equations, simplifying the mechanics of the model
while capturing equivalent dynamics. Benchmarking on density estimation tasks
illustrates that the learned flow can match and surpass maximum likelihood
continuous flows at a fraction of the conventional ODE training costs.
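
To make the quadratic objective concrete, here is a minimal sketch in PyTorch (not the authors' code). It assumes a linear interpolant x_t = (1 - t) x0 + t x1, whose time derivative is x1 - x0; the paper admits more general interpolants. `VelocityNet` is a hypothetical stand-in for any network taking (t, x), and the base/target samples below are illustrative placeholders.

```python
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Hypothetical velocity network v(t, x); any (t, x) -> velocity map works."""
    def __init__(self, dim: int, width: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, width), nn.SiLU(),
            nn.Linear(width, width), nn.SiLU(),
            nn.Linear(width, dim),
        )

    def forward(self, t: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # t: (batch, 1), x: (batch, dim)
        return self.net(torch.cat([t, x], dim=-1))

def interpolant_loss(v: VelocityNet, x0: torch.Tensor, x1: torch.Tensor) -> torch.Tensor:
    """Quadratic objective E[|v(t, x_t)|^2 - 2 <dx_t/dt, v(t, x_t)>],
    estimated from paired base/target samples; no ODE solves needed."""
    t = torch.rand(x0.shape[0], 1)        # t ~ Uniform(0, 1)
    xt = (1.0 - t) * x0 + t * x1          # linear interpolant (an assumption)
    dxt = x1 - x0                         # time derivative of the interpolant
    vt = v(t, xt)
    return (vt.pow(2).sum(-1) - 2.0 * (dxt * vt).sum(-1)).mean()

# One optimization step, pairing base samples x0 with target samples x1:
dim = 2
v = VelocityNet(dim)
opt = torch.optim.Adam(v.parameters(), lr=1e-3)
x0 = torch.randn(256, dim)                # base: standard Gaussian
x1 = torch.randn(256, dim) * 0.5 + 2.0    # stand-in target samples
opt.zero_grad()
loss = interpolant_loss(v, x0, x1)
loss.backward()
opt.step()
```

Minimizing this objective is equivalent, up to a constant independent of v, to regressing v onto the interpolant's time derivative, which is why training requires only expectations over samples rather than backpropagation through an ODE solver.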
Related papers
- Conditional Lagrangian Wasserstein Flow for Time Series Imputation [3.914746375834628]
We propose a novel method for time series imputation called Conditional Lagrangian Wasserstein Flow.
The proposed method leverages the (conditional) optimal transport theory to learn the probability flow in a simulation-free manner.
Experimental results on real-world datasets show that the proposed method achieves competitive performance on time series imputation.
arXiv Detail & Related papers (2024-10-10T02:46:28Z)
- Flow matching achieves almost minimax optimal convergence [50.38891696297888]
Flow matching (FM) has gained significant attention as a simulation-free generative model.
This paper discusses the convergence properties of FM for large sample size under the $p$-Wasserstein distance.
We establish that FM can achieve an almost minimax optimal convergence rate for $1 \leq p \leq 2$, presenting the first theoretical evidence that FM can reach convergence rates comparable to those of diffusion models.
arXiv Detail & Related papers (2024-05-31T14:54:51Z)
- Deep conditional distribution learning via conditional Föllmer flow [3.227277661633986]
We introduce an ordinary differential equation (ODE)-based deep generative method for learning conditional distributions, named Conditional Föllmer Flow.
For effective implementation, we discretize the flow with Euler's method, estimating the velocity field nonparametrically using a deep neural network.
arXiv Detail & Related papers (2024-02-02T14:52:10Z)
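
As an illustration of the Euler-discretization recipe mentioned in the entry above, here is a minimal sketch (an assumption-laden example, not that paper's implementation): a learned velocity field `v(t, x)` is integrated from t = 0 to t = 1 with fixed steps to push base samples toward the target.

```python
import torch

def euler_sample(v, x0: torch.Tensor, n_steps: int = 100) -> torch.Tensor:
    """Integrate dx/dt = v(t, x) from t = 0 to t = 1 with Euler's method.
    `v` is any learned velocity field taking (t, x); `x0` holds base samples."""
    x = x0.clone()
    dt = 1.0 / n_steps
    with torch.no_grad():
        for k in range(n_steps):
            t = torch.full((x.shape[0], 1), k * dt)
            x = x + dt * v(t, x)   # one explicit Euler step
    return x
```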
- Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected stochastic differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
arXiv Detail & Related papers (2023-04-10T17:54:38Z)
- Stochastic Interpolants: A Unifying Framework for Flows and Diffusions [16.95541777254722]
A class of generative models that unifies flow-based and diffusion-based methods is introduced.
These models extend the framework proposed in Albergo & Vanden-Eijnden (2023), enabling the use of a broad class of continuous-time processes called 'stochastic interpolants'.
These interpolants are built by combining data from the two prescribed densities with an additional latent variable that shapes the bridge in a flexible way.
arXiv Detail & Related papers (2023-03-15T17:43:42Z)
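
One common affine instance of such a latent-augmented interpolant (an assumption; the framework allows broader choices) combines the two endpoint samples with a Gaussian latent z whose coefficient vanishes at the boundaries:

```latex
x_t = \alpha(t)\, x_0 + \beta(t)\, x_1 + \gamma(t)\, z, \qquad z \sim \mathcal{N}(0, I),
```

with boundary conditions \alpha(0) = \beta(1) = 1, \alpha(1) = \beta(0) = 0, and \gamma(0) = \gamma(1) = 0, so that x_t recovers a sample from the base at t = 0 and from the target at t = 1.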
- Score-based Continuous-time Discrete Diffusion Models [102.65769839899315]
We extend diffusion models to discrete variables by introducing a Markov jump process where the reverse process denoises via a continuous-time Markov chain.
We show that an unbiased estimator can be obtained by simply matching the conditional marginal distributions.
We demonstrate the effectiveness of the proposed method on a set of synthetic and real-world music and image benchmarks.
arXiv Detail & Related papers (2022-11-30T05:33:29Z)
- Efficient CDF Approximations for Normalizing Flows [64.60846767084877]
We build upon the diffeomorphic properties of normalizing flows to estimate the cumulative distribution function (CDF) over a closed region.
Our experiments on popular flow architectures and UCI datasets show a marked improvement in sample efficiency as compared to traditional estimators.
arXiv Detail & Related papers (2022-02-23T06:11:49Z)
- Density Ratio Estimation via Infinitesimal Classification [85.08255198145304]
We propose DRE-infty, a divide-and-conquer approach that reduces density ratio estimation (DRE) to a series of easier subproblems.
Inspired by Monte Carlo methods, we smoothly interpolate between the two distributions via an infinite continuum of intermediate bridge distributions.
We show that our approach performs well on downstream tasks such as mutual information estimation and energy-based modeling on complex, high-dimensional datasets.
arXiv Detail & Related papers (2021-11-22T06:26:29Z)
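
The divide-and-conquer idea behind DRE-infty can be summarized by the standard telescoping decomposition below (a textbook identity, not that paper's exact estimator); taking the continuum limit over bridge distributions turns the sum into an integral of the instantaneous change of log-density along the bridge:

```latex
\log \frac{p(x)}{q(x)}
  = \sum_{k=0}^{K-1} \log \frac{p_{t_{k+1}}(x)}{p_{t_k}(x)}
  \;\xrightarrow[K \to \infty]{}\; \int_0^1 \partial_t \log p_t(x)\, dt,
\qquad p_{t_0} = q, \quad p_{t_K} = p.
```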
- Resampling Base Distributions of Normalizing Flows [0.0]
We introduce a base distribution for normalizing flows based on learned rejection sampling.
We develop suitable learning algorithms using both maximum likelihood estimation and optimization of the reverse Kullback-Leibler divergence.
arXiv Detail & Related papers (2021-10-29T14:44:44Z)
- Learning Likelihoods with Conditional Normalizing Flows [54.60456010771409]
Conditional normalizing flows (CNFs) are efficient in sampling and inference.
We present a study of CNFs in which the mapping from base density to output space is conditioned on an input x to model conditional densities p(y|x).
arXiv Detail & Related papers (2019-11-29T19:17:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.