Stochastic interpolants with data-dependent couplings
- URL: http://arxiv.org/abs/2310.03725v2
- Date: Fri, 15 Dec 2023 18:44:46 GMT
- Title: Stochastic interpolants with data-dependent couplings
- Authors: Michael S. Albergo, Mark Goldstein, Nicholas M. Boffi, Rajesh
Ranganath, Eric Vanden-Eijnden
- Abstract summary: We use the framework of stochastic interpolants to formalize how to couple the base and the target densities.
We show that these transport maps can be learned by solving a simple square loss regression problem analogous to the standard independent setting.
- Score: 33.75376251800151
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative models inspired by dynamical transport of measure -- such as flows
and diffusions -- construct a continuous-time map between two probability
densities. Conventionally, one of these is the target density, only accessible
through samples, while the other is taken as a simple base density that is
data-agnostic. In this work, using the framework of stochastic interpolants, we
formalize how to couple the base and the target densities, whereby
samples from the base are computed conditionally given samples from the target
in a way that is different from (but does not preclude) incorporating information
about class labels or continuous embeddings. This enables us to construct
dynamical transport maps that serve as conditional generative models. We show
that these transport maps can be learned by solving a simple square loss
regression problem analogous to the standard independent setting. We
demonstrate the usefulness of constructing dependent couplings in practice
through experiments in super-resolution and in-painting.
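The square-loss regression mentioned in the abstract can be illustrated in a toy setting. Everything below is a sketch under assumptions: the 1-D data-dependent coupling (base samples derived from target samples, loosely mimicking a degradation as in super-resolution), the linear interpolant, and the affine velocity model are all illustrative choices, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D example of a linear stochastic interpolant
#   x_t = (1 - t) * x0 + t * x1,
# where the base sample x0 is drawn *conditionally* given the target
# sample x1 (a data-dependent coupling), rather than independently.

def sample_coupled_pair(n):
    x1 = rng.normal(2.0, 1.0, size=n)              # target samples
    x0 = 0.5 * x1 + rng.normal(0.0, 0.2, size=n)   # base coupled to target
    return x0, x1

def interpolant(x0, x1, t):
    return (1.0 - t) * x0 + t * x1

# The time derivative of this interpolant is x1 - x0; the regression
# fits a velocity model b(t, x_t) to this target under a square loss.
# Here the model is the simplest possible, b(t, x) = a*x + c, fitted by
# ordinary least squares at a fixed time t.
def fit_velocity(t, n=4096):
    x0, x1 = sample_coupled_pair(n)
    xt = interpolant(x0, x1, t)
    target = x1 - x0                                # d/dt x_t
    A = np.stack([xt, np.ones_like(xt)], axis=1)
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return coef                                     # [a, c]

coef = fit_velocity(t=0.5)
```

In practice the affine model would be replaced by a neural network trained over all times t jointly, but the objective remains an ordinary square loss.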
Related papers
- Flow Map Matching [15.520853806024943]
Flow map matching is an algorithm that learns the two-time flow map of an underlying ordinary differential equation.
We show that flow map matching leads to high-quality samples with significantly reduced sampling cost compared to diffusion or interpolant methods.
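The two-time flow map mentioned above satisfies a semigroup property that can be checked in closed form for a linear ODE. The sketch below is illustrative only: a learned flow map would approximate this object for a general ODE, whereas here the exact solution is available.

```python
import numpy as np

# Two-time flow map of the linear ODE dx/dt = a*x:
#   X_{s,t}(x) = exp(a * (t - s)) * x.
# The defining property is composition over intermediate times:
#   X_{t1,t2}(X_{t0,t1}(x)) = X_{t0,t2}(x).

a = -0.7

def flow_map(s, t, x):
    return np.exp(a * (t - s)) * x

x0 = 2.0
one_hop = flow_map(0.0, 1.0, x0)                    # direct map 0 -> 1
two_hop = flow_map(0.5, 1.0, flow_map(0.0, 0.5, x0))  # via t = 0.5
```

Matching a network to this composition structure is what lets a flow map skip the many small integrator steps that diffusion or interpolant sampling normally requires.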
arXiv Detail & Related papers (2024-06-11T17:41:26Z) - On the Trajectory Regularity of ODE-based Diffusion Sampling [79.17334230868693]
Diffusion-based generative models use differential equations to establish a smooth connection between a complex data distribution and a tractable prior distribution.
In this paper, we identify several intriguing trajectory properties in the ODE-based sampling process of diffusion models.
arXiv Detail & Related papers (2024-05-18T15:59:41Z) - Time-series Generation by Contrastive Imitation [87.51882102248395]
We study a generative framework that seeks to combine the strengths of both: Motivated by a moment-matching objective to mitigate compounding error, we optimize a local (but forward-looking) transition policy.
At inference, the learned policy serves as the generator for iterative sampling, and the learned energy serves as a trajectory-level measure for evaluating sample quality.
arXiv Detail & Related papers (2023-11-02T16:45:25Z) - Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
arXiv Detail & Related papers (2023-04-10T17:54:38Z) - Stochastic Interpolants: A Unifying Framework for Flows and Diffusions [16.95541777254722]
A class of generative models that unifies flow-based and diffusion-based methods is introduced.
These models extend the framework proposed in Albergo & Vanden-Eijnden (2023), enabling the use of a broad class of continuous-time processes called stochastic interpolants.
These interpolants are built by combining data from the two prescribed densities with an additional latent variable that shapes the bridge in a flexible way.
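The bridge construction described above can be sketched numerically. The coefficient choices below are one common illustrative example, not the only ones admitted by the framework; the key property is that the latent term vanishes at both endpoints, so the interpolant recovers the two prescribed densities exactly.

```python
import numpy as np

# Stochastic interpolant with a latent bridge variable z ~ N(0, 1):
#   x_t = alpha(t)*x0 + beta(t)*x1 + gamma(t)*z,
# with alpha(0) = beta(1) = 1, alpha(1) = beta(0) = 0,
# and gamma(0) = gamma(1) = 0.

def alpha(t): return 1.0 - t
def beta(t):  return t
def gamma(t): return np.sqrt(2.0 * t * (1.0 - t))   # vanishes at endpoints

def interpolant(x0, x1, z, t):
    return alpha(t) * x0 + beta(t) * x1 + gamma(t) * z

rng = np.random.default_rng(1)
x0 = rng.normal(-2.0, 0.5, size=10_000)   # samples from the base density
x1 = rng.normal(+2.0, 0.5, size=10_000)   # samples from the target density
z  = rng.normal(size=10_000)              # latent variable shaping the bridge

# At t = 0 the interpolant equals x0 exactly; at t = 1 it equals x1.
```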
arXiv Detail & Related papers (2023-03-15T17:43:42Z) - Score-based Continuous-time Discrete Diffusion Models [102.65769839899315]
We extend diffusion models to discrete variables by introducing a Markov jump process where the reverse process denoises via a continuous-time Markov chain.
We show that an unbiased estimator can be obtained by simply matching the conditional marginal distributions.
We demonstrate the effectiveness of the proposed method on a set of synthetic and real-world music and image benchmarks.
arXiv Detail & Related papers (2022-11-30T05:33:29Z) - Concrete Score Matching: Generalized Score Matching for Discrete Data [109.12439278055213]
"Concrete score" is a generalization of the (Stein) score for discrete settings.
"Concrete Score Matching" is a framework to learn such scores from samples.
arXiv Detail & Related papers (2022-11-02T00:41:37Z) - Conditional Permutation Invariant Flows [23.740061786510417]
We present a conditional generative probabilistic model of set-valued data with a tractable log density.
These dynamics are driven by a learnable per-set-element term and pairwise interactions, both parametrized by deep neural networks.
We illustrate the utility of this model via applications including (1) complex traffic scene generation conditioned on visually specified map information, and (2) object bounding box generation conditioned directly on images.
arXiv Detail & Related papers (2022-06-17T21:43:38Z) - Density Ratio Estimation via Infinitesimal Classification [85.08255198145304]
We propose DRE-infty, a divide-and-conquer approach to reduce Density ratio estimation (DRE) to a series of easier subproblems.
Inspired by Monte Carlo methods, we smoothly interpolate between the two distributions via an infinite continuum of intermediate bridge distributions.
We show that our approach performs well on downstream tasks such as mutual information estimation and energy-based modeling on complex, high-dimensional datasets.
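The divide-and-conquer idea above rests on a telescoping identity: the log density ratio between the two endpoint distributions equals the sum of log ratios between adjacent bridge distributions. The sketch below is illustrative: the bridges are Gaussians with interpolated means, so every intermediate ratio is available in closed form and the telescoping is exact, whereas DRE-infty estimates each intermediate ratio with a classifier.

```python
import numpy as np

# log p(x)/q(x) = sum_k [ log q_{t_{k+1}}(x) - log q_{t_k}(x) ]
# for any chain of bridge distributions q_{t_0} = q, ..., q_{t_K} = p.

def gauss_logpdf(x, mu, sigma=1.0):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

mus = np.linspace(0.0, 4.0, 11)   # bridge means from q (mu=0) to p (mu=4)
x = 1.3

# Sum of the easier adjacent ratios ...
telescoped = sum(gauss_logpdf(x, mus[k + 1]) - gauss_logpdf(x, mus[k])
                 for k in range(len(mus) - 1))
# ... equals the hard end-to-end ratio.
direct = gauss_logpdf(x, mus[-1]) - gauss_logpdf(x, mus[0])
```

Each adjacent pair of bridges overlaps substantially, which is what makes the intermediate classification subproblems easy even when the endpoint distributions are nearly disjoint.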
arXiv Detail & Related papers (2021-11-22T06:26:29Z) - ICON: Learning Regular Maps Through Inverse Consistency [19.27928605302463]
We explore what induces regularity for spatial transformations, e.g., when computing image registrations.
We find that deep networks combined with an inverse consistency loss and randomized off-grid interpolation yield well-behaved, approximately diffeomorphic, spatial transformations.
Despite the simplicity of this approach, our experiments present compelling evidence, on both synthetic and real data, that regular maps can be obtained without carefully tuned explicit regularizers, while achieving competitive registration performance.
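The inverse consistency penalty has a simple form: if one map warps image A toward B and another warps B toward A, their composition should be close to the identity. The 1-D sketch below uses affine functions standing in for the registration networks, purely for illustration.

```python
import numpy as np

# Inverse consistency: penalize || phi_AB(phi_BA(x)) - x ||^2 over a grid.

grid = np.linspace(0.0, 1.0, 101)

def phi_AB(x): return 0.9 * x + 0.05     # illustrative forward map
def phi_BA(x): return (x - 0.05) / 0.9   # its exact inverse

def inverse_consistency_loss(f, g, x):
    return np.mean((f(g(x)) - x) ** 2)   # mean squared deviation from identity

loss = inverse_consistency_loss(phi_AB, phi_BA, grid)   # ~0 for exact inverses
```

For learned maps the composition is not exactly the identity, and minimizing this loss (together with the registration objective) is what encourages approximately diffeomorphic transformations.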
arXiv Detail & Related papers (2021-05-10T15:52:12Z) - Variational Mixture of Normalizing Flows [0.0]
Deep generative models, such as generative adversarial networks, variational autoencoders, and their variants, have seen wide adoption for the task of modelling complex data distributions.
Normalizing flows overcome the lack of a tractable likelihood in such models by leveraging the change-of-variables formula for probability density functions.
The present work overcomes this by using normalizing flows as components in a mixture model and devising an end-to-end training procedure for such a model.
arXiv Detail & Related papers (2020-09-01T17:20:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.