Generative Learning With Euler Particle Transport
- URL: http://arxiv.org/abs/2012.06094v1
- Date: Fri, 11 Dec 2020 03:10:53 GMT
- Title: Generative Learning With Euler Particle Transport
- Authors: Yuan Gao, Jian Huang, Yuling Jiao, Jin Liu, Xiliang Lu and Zhijian
Yang
- Abstract summary: We propose an Euler particle transport (EPT) approach for generative learning.
The proposed approach is motivated by the problem of finding an optimal transport map from a reference distribution to a target distribution.
We show that the proposed density-ratio (difference) estimators do not suffer from the "curse of dimensionality" if data is supported on a lower-dimensional manifold.
- Score: 14.557451744544592
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose an Euler particle transport (EPT) approach for generative
learning. The proposed approach is motivated by the problem of finding an
optimal transport map from a reference distribution to a target distribution
characterized by the Monge-Ampère equation. Interpreting the infinitesimal
linearization of the Monge-Ampère equation from the perspective of gradient
flows in measure spaces leads to a stochastic McKean-Vlasov equation. We use
the forward Euler method to solve this equation. The resulting forward Euler
map pushes forward a reference distribution to the target. This map is the
composition of a sequence of simple residual maps, which are computationally
stable and easy to train. The key task in training is the estimation of the
density ratios or differences that determine the residual maps. We estimate the
density ratios (differences) based on the Bregman divergence with a gradient
penalty using deep density-ratio (difference) fitting. We show that the
proposed density-ratio (difference) estimators do not suffer from the "curse of
dimensionality" if data is supported on a lower-dimensional manifold. Numerical
experiments with multi-mode synthetic datasets and comparisons with the
existing methods on real benchmark datasets support our theoretical results and
demonstrate the effectiveness of the proposed method.
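The forward Euler map can be illustrated in a toy one-dimensional setting. The sketch below is not the paper's method: the deep density-ratio estimator is replaced by a moment-matched Gaussian stand-in for the current particle distribution, so the velocity field ∇ log(q/p_t) is available in closed form; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target q = N(4, 1); particles start from the reference N(0, 1).
mu_q, sig_q = 4.0, 1.0

def grad_log_q(x):
    return -(x - mu_q) / sig_q**2

def euler_particle_transport(n=5000, steps=200, s=0.05):
    x = rng.standard_normal(n)              # particles from the reference
    for _ in range(steps):
        # Stand-in for the learned density-ratio model: approximate the
        # current particle density p_t by a moment-matched Gaussian.
        m, v = x.mean(), x.var()
        grad_log_pt = -(x - m) / v
        # Forward Euler step along the velocity field grad log(q / p_t);
        # composing these steps gives a sequence of simple residual maps.
        x = x + s * (grad_log_q(x) - grad_log_pt)
    return x

particles = euler_particle_transport()
print(particles.mean(), particles.std())    # drifts toward the target (4, 1)
```

Each iteration is a residual map x ↦ x + s·v(x), matching the composition structure described in the abstract; in the actual method v is estimated from data rather than known in closed form.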
Related papers
- Sequential transport maps using SoS density estimation and
$\alpha$-divergences [0.6554326244334866]
Transport-based density estimation methods are receiving growing interest because of their ability to efficiently generate samples from the approximated density.
We build on a sequence of composed Knothe-Rosenblatt (KR) maps and explore the use of Sum-of-Squares (SoS) densities and $\alpha$-divergences for approximating the intermediate densities.
We numerically demonstrate our methods on several benchmarks, including Bayesian inference problems and unsupervised learning tasks.
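In one dimension the Knothe-Rosenblatt map reduces to the increasing rearrangement T = F_target⁻¹ ∘ F_ref. A minimal sketch with a uniform reference and an exponential target, both chosen so the CDFs are closed-form (this illustrates the map itself, not the paper's SoS construction):

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D Knothe-Rosenblatt map: increasing rearrangement T = F_exp^{-1} o F_ref.
# Reference: Uniform(0, 1), so F_ref(u) = u.  Target: Exponential(rate).
rate = 2.0

def kr_map(u):
    # Inverse CDF of Exponential(rate): F^{-1}(p) = -ln(1 - p) / rate
    return -np.log1p(-u) / rate

u = rng.uniform(size=100_000)
x = kr_map(u)
print(x.mean())   # ≈ 1 / rate = 0.5
```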
arXiv Detail & Related papers (2024-02-27T23:52:58Z)
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Unbiased Kinetic Langevin Monte Carlo with Inexact Gradients [0.8749675983608172]
We present an unbiased method for posterior means based on kinetic Langevin dynamics.
Our proposed estimator is unbiased, attains finite variance, and satisfies a central limit theorem.
Our results demonstrate that the unbiased algorithm we present can be much more efficient than the "gold-standard" randomized Hamiltonian Monte Carlo.
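For context, a minimal sketch of discretized kinetic (underdamped) Langevin dynamics on a Gaussian toy target, using the standard BAOAB splitting; the paper's unbiasing construction and inexact gradients are omitted, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Kinetic (underdamped) Langevin dynamics targeting pi(x) ∝ exp(-U(x)),
# here U(x) = x^2 / 2, i.e. pi = N(0, 1).  BAOAB splitting scheme.
def grad_U(x):
    return x

def baoab(n_steps=50_000, h=0.1, gamma=1.0):
    x, v = 0.0, 0.0
    c = np.exp(-gamma * h)
    out = np.empty(n_steps)
    for i in range(n_steps):
        v -= 0.5 * h * grad_U(x)                                 # B: half kick
        x += 0.5 * h * v                                         # A: half drift
        v = c * v + np.sqrt(1.0 - c**2) * rng.standard_normal()  # O: friction + noise
        x += 0.5 * h * v                                         # A: half drift
        v -= 0.5 * h * grad_U(x)                                 # B: half kick
        out[i] = x
    return out

samples = baoab()
print(samples.mean(), samples.var())   # close to (0, 1) up to discretization bias
```

The residual discretization bias of such schemes is exactly what the paper's unbiased estimator is designed to remove.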
arXiv Detail & Related papers (2023-11-08T21:19:52Z)
- Sobolev Space Regularised Pre Density Models [51.558848491038916]
We propose a new approach to non-parametric density estimation that is based on regularizing a Sobolev norm of the density.
This method is statistically consistent, and makes the inductive validation model clear and consistent.
arXiv Detail & Related papers (2023-07-25T18:47:53Z)
- Efficient Training of Energy-Based Models Using Jarzynski Equality [13.636994997309307]
Energy-based models (EBMs) are generative models inspired by statistical physics.
Maximum-likelihood training requires the gradient of the log-likelihood with respect to the model parameters, which in turn requires sampling from the model distribution.
Here we show how results from nonequilibrium thermodynamics based on the Jarzynski equality can be used to perform this computation efficiently.
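The Jarzynski identity E[exp(-W)] = Z₁/Z₀, relating nonequilibrium work W to a ratio of normalizing constants, can be checked on a toy Gaussian annealing path where every intermediate distribution is sampled exactly (this sidesteps the MCMC transitions a real EBM would need; names and the path are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy annealing path between pi_0 = N(0, 1) and pi_1 = N(0, sigma^2):
# U_beta(x) = a(beta) x^2 / 2 with precision a interpolated linearly.
# The true ratio of normalizing constants is Z_1 / Z_0 = sigma.
sigma = 2.0

def a(beta):
    return (1.0 - beta) + beta / sigma**2

def U(x, beta):
    return 0.5 * a(beta) * x**2

n, K = 20_000, 200
betas = np.linspace(0.0, 1.0, K + 1)
x = rng.standard_normal(n)             # exact samples from pi_0
logw = np.zeros(n)
for b0, b1 in zip(betas[:-1], betas[1:]):
    logw += U(x, b0) - U(x, b1)        # accumulate minus the work increment
    # Exact refresh from pi_{b1} keeps the chain in equilibrium
    # (possible here because every intermediate pi_beta is Gaussian).
    x = rng.standard_normal(n) / np.sqrt(a(b1))

est = np.exp(logw).mean()
print(est)                             # ≈ sigma = 2.0
```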
arXiv Detail & Related papers (2023-05-30T21:07:52Z)
- Gaussian process regression and conditional Karhunen-Lo\'{e}ve models for data assimilation in inverse problems [68.8204255655161]
We present a model inversion algorithm, CKLEMAP, for data assimilation and parameter estimation in partial differential equation models.
The CKLEMAP method provides better scalability compared to the standard MAP method.
arXiv Detail & Related papers (2023-01-26T18:14:12Z)
- Score-based Continuous-time Discrete Diffusion Models [102.65769839899315]
We extend diffusion models to discrete variables by introducing a Markov jump process where the reverse process denoises via a continuous-time Markov chain.
We show that an unbiased estimator can be obtained by simply matching the conditional marginal distributions.
We demonstrate the effectiveness of the proposed method on a set of synthetic and real-world music and image benchmarks.
arXiv Detail & Related papers (2022-11-30T05:33:29Z)
- Near-optimal estimation of smooth transport maps with kernel sums-of-squares [81.02564078640275]
Under smoothness conditions, the squared Wasserstein distance between two distributions can be efficiently computed with appealing statistical error upper bounds.
The object of interest for applications such as generative modeling is the underlying optimal transport map.
We propose the first tractable algorithm for which the statistical $L^2$ error on the maps nearly matches the existing minimax lower bounds for smooth map estimation.
arXiv Detail & Related papers (2021-12-03T13:45:36Z)
- Density Ratio Estimation via Infinitesimal Classification [85.08255198145304]
We propose DRE-infty, a divide-and-conquer approach that reduces density ratio estimation (DRE) to a series of easier subproblems.
Inspired by Monte Carlo methods, we smoothly interpolate between the two distributions via an infinite continuum of intermediate bridge distributions.
We show that our approach performs well on downstream tasks such as mutual information estimation and energy-based modeling on complex, high-dimensional datasets.
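The telescoping idea behind bridge distributions can be seen in a toy Gaussian example where every intermediate log-ratio is known in closed form (illustrative only, not the DRE-infty estimator, which learns each ratio by classification):

```python
import numpy as np

# A hard ratio q/p equals the product of many easier ratios through
# bridge distributions p = p_0, p_1, ..., p_K = q.
# Toy case: p = N(0, 1), q = N(6, 1), bridges N(6k/K, 1).
K = 30
means = np.linspace(0.0, 6.0, K + 1)

def log_normal(x, m):
    return -0.5 * (x - m) ** 2          # log density up to a constant

x = 1.5
direct = log_normal(x, 6.0) - log_normal(x, 0.0)
telescoped = sum(log_normal(x, means[k + 1]) - log_normal(x, means[k])
                 for k in range(K))
print(direct, telescoped)               # identical by telescoping
```

Each adjacent pair of bridges is close, so each intermediate ratio is far easier to estimate than the direct one, which is the point of the divide-and-conquer construction.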
arXiv Detail & Related papers (2021-11-22T06:26:29Z)
- Deep Generative Learning via Schr\"{o}dinger Bridge [14.138796631423954]
We learn a generative model via entropy interpolation with a Schrödinger Bridge.
We show that the generative model via the Schrödinger Bridge is comparable with state-of-the-art GANs.
arXiv Detail & Related papers (2021-06-19T03:35:42Z)
- Learning Implicit Generative Models with Theoretical Guarantees [12.761710596142109]
We propose a unified framework for implicit generative modeling (UnifiGem).
UnifiGem integrates approaches from optimal transport, numerical ODE, density-ratio (density-difference) estimation and deep neural networks.
Experimental results on both synthetic datasets and real benchmark datasets support our theoretical findings and demonstrate the effectiveness of UnifiGem.
arXiv Detail & Related papers (2020-02-07T15:55:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.