Learning Implicit Generative Models with Theoretical Guarantees
- URL: http://arxiv.org/abs/2002.02862v2
- Date: Mon, 17 Feb 2020 14:26:17 GMT
- Title: Learning Implicit Generative Models with Theoretical Guarantees
- Authors: Yuan Gao and Jian Huang and Yuling Jiao and Jin Liu
- Abstract summary: We propose a unified framework for implicit generative modeling (UnifiGem).
UnifiGem integrates approaches from optimal transport, numerical ODE methods, density-ratio (density-difference) estimation, and deep neural networks.
Experimental results on both synthetic datasets and real benchmark datasets support our theoretical findings and demonstrate the effectiveness of UnifiGem.
- Score: 12.761710596142109
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a \textbf{uni}fied \textbf{f}ramework for \textbf{i}mplicit
\textbf{ge}nerative \textbf{m}odeling (UnifiGem) with theoretical guarantees by
integrating approaches from optimal transport, numerical ODE, density-ratio
(density-difference) estimation and deep neural networks. First, the problem of
implicit generative learning is formulated as that of finding the optimal
transport map between the reference distribution and the target distribution,
which is characterized by a totally nonlinear Monge-Amp\`{e}re equation.
Interpreting the infinitesimal linearization of the Monge-Amp\`{e}re equation
from the perspective of gradient flows in measure spaces leads to the
continuity equation or the McKean-Vlasov equation. We then solve the
McKean-Vlasov equation numerically using the forward Euler iteration, where the
forward Euler map depends on the density ratio (density difference) between the
distribution at current iteration and the underlying target distribution. We
further estimate the density ratio (density difference) via deep density-ratio
(density-difference) fitting and derive explicit upper bounds on the estimation
error. Experimental results on both synthetic datasets and real benchmark
datasets support our theoretical findings and demonstrate the effectiveness of
UnifiGem.
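To make the pipeline concrete, here is a minimal sketch of one outer iteration, assuming the KL objective so that the forward Euler velocity field is the gradient of the estimated log density ratio, fitted by deep logistic regression. `RatioNet`, `euler_transport_step`, and all hyper-parameters are illustrative placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn

class RatioNet(nn.Module):
    """Small MLP d(x); sigmoid(d) discriminates target vs. current particles,
    so at the optimum d(x) estimates log(p_target / p_current)(x)."""
    def __init__(self, dim, width=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, 1),
        )

    def forward(self, x):
        return self.net(x)

def euler_transport_step(particles, target, step_size=0.1, fit_iters=200):
    """One forward Euler step: fit the log density ratio by logistic
    regression, then move particles along its gradient."""
    d = RatioNet(particles.shape[1])
    opt = torch.optim.Adam(d.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(fit_iters):
        opt.zero_grad()
        logits = torch.cat([d(target), d(particles)])
        labels = torch.cat([torch.ones(len(target), 1),
                            torch.zeros(len(particles), 1)])
        bce(logits, labels).backward()
        opt.step()
    # Forward Euler map: x <- x + s * grad_x log(p_target / p_current)(x)
    x = particles.detach().requires_grad_(True)
    grad = torch.autograd.grad(d(x).sum(), x)[0]
    return (x + step_size * grad).detach()
```

Iterating `euler_transport_step` from reference samples (e.g. Gaussian noise) pushes the particle cloud toward the target, which is exactly the forward Euler discretization of the McKean-Vlasov dynamics described in the abstract.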
Related papers
- Straightness of Rectified Flow: A Theoretical Insight into Wasserstein Convergence [54.580605276017096]
Diffusion models have emerged as a powerful tool for image generation and denoising.
Recently, Liu et al. designed a novel alternative generative model, Rectified Flow (RF).
RF aims to learn straight flow trajectories from noise to data using a sequence of convex optimization problems.
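For context, the least-squares problem at the heart of RF is usually written as follows, with $X_0$ drawn from the noise distribution and $X_1$ from the data (a generic statement of the objective; the paper's exact formulation may differ):
\[
\min_{v}\;\int_{0}^{1}\mathbb{E}\,\bigl\|\,(X_{1}-X_{0}) - v(X_{t},t)\,\bigr\|^{2}\,dt,
\qquad X_{t} = t\,X_{1} + (1-t)\,X_{0}.
\]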
arXiv Detail & Related papers (2024-10-19T02:36:11Z)
- Deep conditional distribution learning via conditional Föllmer flow
We introduce an ordinary differential equation (ODE) based deep generative method for learning conditional distributions, named Conditional Föllmer Flow.
For effective implementation, we discretize the flow with Euler's method and estimate the velocity field nonparametrically using a deep neural network.
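Concretely, the Euler discretization referred to here takes the generic form (notation illustrative: $h$ the step size, $\hat{v}_{\theta}$ the learned velocity field):
\[
X_{k+1} = X_{k} + h\,\hat{v}_{\theta}(X_{k}, t_{k}),
\qquad t_{k} = kh,\quad X_{0}\sim \text{reference distribution}.
\]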
arXiv Detail & Related papers (2024-02-02T14:52:10Z)
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
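For reference, the classic consensus-ADMM iteration that such distributed schemes build on reads (standard form with penalty $\rho$ and scaled duals $u_i$; not necessarily the exact scheme in the paper):
\[
x_{i}^{k+1} = \arg\min_{x_{i}}\; f_{i}(x_{i}) + \tfrac{\rho}{2}\bigl\|x_{i} - z^{k} + u_{i}^{k}\bigr\|^{2},
\quad
z^{k+1} = \frac{1}{N}\sum_{i=1}^{N}\bigl(x_{i}^{k+1} + u_{i}^{k}\bigr),
\quad
u_{i}^{k+1} = u_{i}^{k} + x_{i}^{k+1} - z^{k+1}.
\]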
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Flow-based Distributionally Robust Optimization [23.232731771848883]
We present a framework, called $\texttt{FlowDRO}$, for solving flow-based distributionally robust optimization (DRO) problems with Wasserstein uncertainty sets.
We aim to find the continuous worst-case distribution (also called the least favorable distribution, LFD) and sample from it.
We demonstrate its usage in adversarial learning, distributionally robust hypothesis testing, and a new mechanism for data-driven distribution perturbation differential privacy.
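In standard notation, the LFD solves the inner maximization of the Wasserstein DRO problem ($P_0$ the nominal distribution, $\rho$ the radius, $\ell$ the loss; a generic statement, not the paper's exact setup):
\[
\max_{P:\;W(P,\,P_{0})\le \rho}\;\mathbb{E}_{X\sim P}\bigl[\ell(X;\theta)\bigr].
\]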
arXiv Detail & Related papers (2023-10-30T03:53:31Z)
- Sobolev Space Regularised Pre Density Models [51.558848491038916]
We propose a new approach to non-parametric density estimation that is based on regularizing a Sobolev norm of the density.
This method is statistically consistent and makes the inductive bias of the model clear and interpretable.
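One conventional way to write such a penalized estimator (an illustrative sketch only; the paper's exact "pre density" objective and normalization may differ) is penalized maximum likelihood with a Sobolev-norm penalty, where $\mathcal{F}$ denotes the Fourier transform:
\[
\hat{p} \in \arg\min_{p \ge 0}\; -\frac{1}{n}\sum_{i=1}^{n} \log p(x_{i}) + \lambda\,\|p\|_{H^{s}}^{2},
\qquad
\|p\|_{H^{s}}^{2} = \int \bigl(1+\|\omega\|^{2}\bigr)^{s}\,\bigl|(\mathcal{F}p)(\omega)\bigr|^{2}\,d\omega .
\]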
arXiv Detail & Related papers (2023-07-25T18:47:53Z)
- A Geometric Perspective on Diffusion Models [57.27857591493788]
We inspect the ODE-based sampling of a popular variance-exploding SDE.
We establish a theoretical relationship between the optimal ODE-based sampling and the classic mean-shift (mode-seeking) algorithm.
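For comparison, the mode-seeking iteration referenced here is the classic kernel mean-shift update with kernel $K_h$ (bandwidth $h$):
\[
x \;\leftarrow\; \frac{\sum_{i=1}^{n} K_{h}(x - x_{i})\,x_{i}}{\sum_{i=1}^{n} K_{h}(x - x_{i})} .
\]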
arXiv Detail & Related papers (2023-05-31T15:33:16Z)
- Density Ratio Estimation via Infinitesimal Classification [85.08255198145304]
We propose DRE-$\infty$, a divide-and-conquer approach to reduce density ratio estimation (DRE) to a series of easier subproblems.
Inspired by Monte Carlo methods, we smoothly interpolate between the two distributions via an infinite continuum of intermediate bridge distributions.
We show that our approach performs well on downstream tasks such as mutual information estimation and energy-based modeling on complex, high-dimensional datasets.
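The divide-and-conquer idea can be stated in one line: telescope the log ratio across bridge distributions $p_{t_k}$, which in the continuum limit becomes an integral of the time score (notation illustrative):
\[
\log\frac{p(x)}{q(x)} \;=\; \sum_{k=0}^{m-1}\log\frac{p_{t_{k+1}}(x)}{p_{t_{k}}(x)}
\;\longrightarrow\; \int_{0}^{1}\frac{\partial}{\partial t}\log p_{t}(x)\,dt
\quad (m \to \infty),
\qquad p_{0}=q,\; p_{1}=p .
\]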
arXiv Detail & Related papers (2021-11-22T06:26:29Z)
- Deep Generative Learning via Schr\"{o}dinger Bridge [14.138796631423954]
We learn a generative model via entropy interpolation with a Schrödinger Bridge.
We show that the generative model obtained via the Schrödinger Bridge is comparable with state-of-the-art GANs.
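In its standard dynamic formulation, the Schrödinger bridge seeks the path measure with prescribed endpoint marginals $\mu$ (reference) and $\nu$ (data) that is closest in relative entropy to the Wiener measure $\mathbb{W}$ (a generic statement of the problem, not the paper's training objective):
\[
\min_{Q:\;Q_{0}=\mu,\;Q_{1}=\nu}\; \mathrm{KL}\bigl(Q \,\big\|\, \mathbb{W}\bigr).
\]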
arXiv Detail & Related papers (2021-06-19T03:35:42Z)
- Generative Learning With Euler Particle Transport [14.557451744544592]
We propose an Euler particle transport (EPT) approach for generative learning.
The proposed approach is motivated by the problem of finding an optimal transport map from a reference distribution to a target distribution.
We show that the proposed density-ratio (difference) estimators do not suffer from the "curse of dimensionality" if data is supported on a lower-dimensional manifold.
arXiv Detail & Related papers (2020-12-11T03:10:53Z)
- Generative Modeling with Denoising Auto-Encoders and Langevin Sampling [88.83704353627554]
We show that both denoising auto-encoders (DAE) and denoising score matching (DSM) provide estimates of the score of the smoothed population density.
We then apply our results to the homotopy method of arXiv:1907.05600 and provide theoretical justification for its empirical success.
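The underlying identity is Tweedie-style: for Gaussian corruption at noise level $\sigma$, the Bayes-optimal denoiser $r_{\sigma}$ recovers the score of the smoothed density (a standard result; symbols illustrative):
\[
\nabla_{x}\log p_{\sigma}(x) \;=\; \frac{r_{\sigma}(x) - x}{\sigma^{2}},
\qquad p_{\sigma} = p * \mathcal{N}(0,\sigma^{2}I) .
\]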
arXiv Detail & Related papers (2020-01-31T23:50:03Z)