Wasserstein Geodesic Generator for Conditional Distributions
- URL: http://arxiv.org/abs/2308.10145v3
- Date: Mon, 28 Aug 2023 16:13:03 GMT
- Title: Wasserstein Geodesic Generator for Conditional Distributions
- Authors: Young-geun Kim, Kyungbok Lee, Youngwon Choi, Joong-Ho Won, Myunghee
Cho Paik
- Abstract summary: We propose a novel conditional generation algorithm where conditional distributions are fully characterized by a metric space defined by a statistical distance.
We employ optimal transport theory to propose the Wasserstein geodesic generator, a new conditional generator that learns the Wasserstein geodesic.
- Score: 25.436269587204293
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generating samples given a specific label requires estimating conditional
distributions. We derive a tractable upper bound of the Wasserstein distance
between conditional distributions to lay the theoretical groundwork to learn
conditional distributions. Based on this result, we propose a novel conditional
generation algorithm where conditional distributions are fully characterized by
a metric space defined by a statistical distance. We employ optimal transport
theory to propose the Wasserstein geodesic generator, a new conditional
generator that learns the Wasserstein geodesic. The proposed method learns both
conditional distributions for observed domains and optimal transport maps
between them. The conditional distributions given unobserved intermediate
domains are on the Wasserstein geodesic between conditional distributions given
two observed domain labels. Experiments on face images with light conditions as
domain labels demonstrate the efficacy of the proposed method.
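As a rough illustration of the geodesic structure the abstract describes, the sketch below interpolates between two one-dimensional conditional distributions along the Wasserstein-2 geodesic. In one dimension the optimal transport map between equal-size empirical samples is obtained by sorting, so no neural networks are needed; the lighting example and all names are illustrative, not the paper's implementation.

```python
import numpy as np

def geodesic_interpolation_1d(source, target, t):
    """Samples from the Wasserstein-2 geodesic between two 1-D empirical
    distributions at time t in [0, 1] (McCann displacement interpolation).

    In 1-D the optimal transport map between equal-size samples pairs
    sorted source points with sorted target points, so the geodesic point
    is the convex combination (1 - t) * x + t * T(x).
    """
    x = np.sort(np.asarray(source, dtype=float))
    y = np.sort(np.asarray(target, dtype=float))
    assert x.shape == y.shape, "sketch assumes equal sample sizes"
    return (1.0 - t) * x + t * y

# Toy example: conditional distributions for two observed domain labels,
# e.g. "dark" and "bright" lighting; t = 0.5 plays the role of an
# unobserved intermediate domain.
rng = np.random.default_rng(0)
dark = rng.normal(loc=-2.0, scale=0.5, size=1000)    # P(X | label = dark)
bright = rng.normal(loc=3.0, scale=1.0, size=1000)   # P(X | label = bright)
midpoint = geodesic_interpolation_1d(dark, bright, t=0.5)
print(midpoint.mean(), midpoint.std())  # roughly mean 0.5, std 0.75
```

In the paper's setting the conditional distributions are high-dimensional image distributions, so both the conditional generators and the transport maps between observed domains must be learned with neural networks; the sorting trick above is only valid in one dimension.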
Related papers
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantees with explicit dimensional dependencies for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z) - Unveil Conditional Diffusion Models with Classifier-free Guidance: A Sharp Statistical Theory [87.00653989457834]
Conditional diffusion models serve as the foundation of modern image synthesis and find extensive application in fields like computational biology and reinforcement learning.
Despite the empirical success, theory of conditional diffusion models is largely missing.
This paper bridges the gap by presenting a sharp statistical theory of distribution estimation using conditional diffusion models.
arXiv Detail & Related papers (2024-03-18T17:08:24Z) - Deep conditional distribution learning via conditional Föllmer flow [3.227277661633986]
We introduce an ordinary differential equation (ODE) based deep generative method for learning conditional distributions, named Conditional Föllmer Flow.
For effective implementation, we discretize the flow with Euler's method, estimating the velocity field nonparametrically with a deep neural network.
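As a hedged sketch of the sampling loop this abstract describes, the code below integrates a conditional velocity field with Euler's method from Gaussian noise at t = 0 to samples at t = 1. The `toy_velocity` function is a stand-in for a trained neural velocity estimate and is purely illustrative.

```python
import numpy as np

def euler_sample(velocity, label, dim, n_steps=100, n_samples=16, seed=0):
    """Draw samples from a conditional ODE flow by Euler discretization.

    `velocity(x, t, label)` stands in for a trained neural estimate of the
    velocity field (a hypothetical callable, not the paper's model).
    Starting points are standard Gaussian noise; the flow is integrated
    from t = 0 to t = 1 with a fixed step size.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_samples, dim))
    h = 1.0 / n_steps
    for k in range(n_steps):
        t = k * h
        x = x + h * velocity(x, t, label)   # Euler update x_{k+1} = x_k + h v(x_k, t_k, c)
    return x

# Toy velocity field (illustrative only): drives samples toward a
# label-dependent mean as t -> 1.
def toy_velocity(x, t, label):
    mu = np.full(x.shape[1], float(label))
    return (mu - x) / max(1.0 - t, 1e-3)

samples = euler_sample(toy_velocity, label=2, dim=3)
print(samples.mean(axis=0))   # close to (2, 2, 2)
```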
arXiv Detail & Related papers (2024-02-02T14:52:10Z) - Distributed Markov Chain Monte Carlo Sampling based on the Alternating
Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z) - Flow Away your Differences: Conditional Normalizing Flows as an
Improvement to Reweighting [0.0]
We present an alternative to reweighting techniques for modifying distributions to account for a desired change in an underlying conditional distribution.
We employ conditional normalizing flows to learn the full conditional probability distribution.
In our examples, this leads to a statistical precision up to three times greater than using reweighting techniques with identical sample sizes for the source and target distributions.
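A minimal sketch of the conditional-normalizing-flow idea, assuming a single affine (location-scale) transform whose parameters are hand-set functions of the condition; practical models stack many learned coupling layers. The class and functions below are hypothetical illustrations, not the paper's code.

```python
import numpy as np

class ConditionalAffineFlow:
    """One-layer conditional normalizing flow: x = mu(c) + sigma(c) * z,
    z ~ N(0, I). A real model stacks many such layers with neural
    conditioners; this only illustrates the change-of-variables density."""

    def __init__(self, mu_fn, log_sigma_fn):
        self.mu_fn = mu_fn                # c -> mean, shape (dim,)
        self.log_sigma_fn = log_sigma_fn  # c -> log scale, shape (dim,)

    def sample(self, c, n, rng):
        z = rng.standard_normal((n, self.mu_fn(c).shape[0]))
        return self.mu_fn(c) + np.exp(self.log_sigma_fn(c)) * z

    def log_prob(self, x, c):
        mu, log_sigma = self.mu_fn(c), self.log_sigma_fn(c)
        z = (x - mu) * np.exp(-log_sigma)                  # inverse transform
        log_base = -0.5 * np.sum(z**2 + np.log(2 * np.pi), axis=-1)
        return log_base - np.sum(log_sigma)                # change of variables

# Condition c shifts the mean and scales the spread.
flow = ConditionalAffineFlow(
    mu_fn=lambda c: np.array([c, -c], dtype=float),
    log_sigma_fn=lambda c: np.array([0.1 * c, 0.0]),
)
rng = np.random.default_rng(0)
x = flow.sample(c=2.0, n=5, rng=rng)
print(flow.log_prob(x, c=2.0))
```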
arXiv Detail & Related papers (2023-04-28T16:33:50Z) - Energy-Based Sliced Wasserstein Distance [47.18652387199418]
A key component of the sliced Wasserstein (SW) distance is the slicing distribution.
We propose to design the slicing distribution as an energy-based distribution that is parameter-free.
We then derive a novel sliced Wasserstein metric, the energy-based sliced Wasserstein (EBSW) distance.
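The following sketch contrasts the standard sliced Wasserstein estimator (uniform slicing distribution) with an importance-reweighted variant in which slices are weighted by an exponential energy of their one-dimensional distances, in the spirit of the abstract; it is an illustration, not the paper's exact EBSW estimator.

```python
import numpy as np

def wasserstein_1d(u, v, p=2):
    """p-Wasserstein distance between two equal-size 1-D samples (sort and pair)."""
    return (np.mean(np.abs(np.sort(u) - np.sort(v)) ** p)) ** (1.0 / p)

def sliced_wasserstein(X, Y, n_slices=256, p=2, energy_based=False, seed=0):
    """Monte-Carlo sliced Wasserstein distance between samples X and Y.

    With energy_based=False the slicing distribution is uniform on the sphere.
    With energy_based=True, slices are importance-reweighted in proportion to
    exp(d_theta^p), so directions that separate the distributions more count
    more (an energy-based slicing distribution in spirit, not the paper's
    exact estimator).
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    thetas = rng.standard_normal((n_slices, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)   # uniform directions
    dists = np.array([wasserstein_1d(X @ th, Y @ th, p) for th in thetas])
    if energy_based:
        w = np.exp(dists ** p - np.max(dists ** p))           # stabilized weights
        w /= w.sum()
        return (np.sum(w * dists ** p)) ** (1.0 / p)
    return (np.mean(dists ** p)) ** (1.0 / p)

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(500, 10))
Y = rng.normal(1.0, 1.0, size=(500, 10))
print(sliced_wasserstein(X, Y), sliced_wasserstein(X, Y, energy_based=True))
```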
arXiv Detail & Related papers (2023-04-26T14:28:45Z) - Nearest-Neighbor Sampling Based Conditional Independence Testing [15.478671471695794]
The conditional randomization test (CRT) was recently proposed to test whether two random variables X and Y are conditionally independent given random variables Z.
The aim of this paper is to develop a novel alternative of CRT by using nearest-neighbor sampling without assuming the exact form of the distribution of X given Z.
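A rough sketch of the nearest-neighbor-sampling idea: instead of drawing X from a known conditional law given Z, each resample takes the X value of a randomly chosen near neighbor in Z, and the CRT p-value compares the observed statistic with the resampled ones. The test statistic and all parameter choices below are hypothetical, not the paper's procedure.

```python
import numpy as np

def nn_crt_pvalue(X, Y, Z, stat, k=5, n_resamples=500, seed=0):
    """Nearest-neighbor-sampling conditional randomization test (sketch).

    Each resampled X*_i is taken uniformly from the X values of the k nearest
    neighbors of Z_i, approximating a draw from the law of X given Z without
    knowing its exact form. `stat` is any test statistic.
    """
    rng = np.random.default_rng(seed)
    n = len(Z)
    Zc = Z.reshape(n, -1)
    # Pairwise distances in Z; exclude self-matches via an infinite diagonal.
    D = np.linalg.norm(Zc[:, None, :] - Zc[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    nbrs = np.argsort(D, axis=1)[:, :k]              # k nearest neighbors per point
    t_obs = stat(X, Y, Z)
    t_null = np.empty(n_resamples)
    for b in range(n_resamples):
        picks = nbrs[np.arange(n), rng.integers(0, k, size=n)]
        t_null[b] = stat(X[picks], Y, Z)             # resample X from neighbors' X
    return (1 + np.sum(t_null >= t_obs)) / (1 + n_resamples)

# Toy check: X and Y both depend on Z but are conditionally independent given Z.
rng = np.random.default_rng(1)
Z = rng.normal(size=500)
X = Z + 0.5 * rng.normal(size=500)
Y = Z + 0.5 * rng.normal(size=500)
stat = lambda x, y, z: abs(np.corrcoef(x, y)[0, 1])
print(nn_crt_pvalue(X, Y, Z, stat))   # typically not small under conditional independence
```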
arXiv Detail & Related papers (2023-04-09T07:54:36Z) - Optimal 1-Wasserstein Distance for WGANs [2.1174215880331775]
We provide a thorough analysis of Wasserstein GANs (WGANs) in both the finite sample and asymptotic regimes.
We derive in passing new results on optimal transport theory in the semi-discrete setting.
arXiv Detail & Related papers (2022-01-08T13:04:03Z) - Wasserstein Generative Learning of Conditional Distribution [6.051520664893158]
We propose a Wasserstein generative approach to learning a conditional distribution.
We establish non-asymptotic error bound of the conditional sampling distribution generated by the proposed method.
arXiv Detail & Related papers (2021-12-19T01:55:01Z) - Learning High Dimensional Wasserstein Geodesics [55.086626708837635]
We propose a new formulation and learning strategy for computing the Wasserstein geodesic between two probability distributions in high dimensions.
By applying the method of Lagrange multipliers to the dynamic formulation of the optimal transport (OT) problem, we derive a minimax problem whose saddle point is the Wasserstein geodesic.
We then parametrize the functions by deep neural networks and design a sample based bidirectional learning algorithm for training.
arXiv Detail & Related papers (2021-02-05T04:25:28Z) - Variational Transport: A Convergent Particle-BasedAlgorithm for Distributional Optimization [106.70006655990176]
A distributional optimization problem arises widely in machine learning and statistics.
We propose a novel particle-based algorithm, dubbed variational transport, which approximately performs Wasserstein gradient descent.
We prove that when the objective function satisfies a functional version of the Polyak-Lojasiewicz (PL) condition (Polyak, 1963) and smoothness conditions, variational transport converges linearly.
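A minimal sketch of the particle view of Wasserstein gradient descent in the simplest special case, a linear potential functional F(mu) = E_mu[V(X)], whose Wasserstein gradient flow moves each particle along the negative gradient of V. Variational transport handles general objectives by estimating the descent direction through a dual (variational) problem, which is not implemented here; all names are illustrative.

```python
import numpy as np

def particle_wasserstein_gd(particles, grad_V, step=0.1, n_iters=200):
    """Particle approximation of Wasserstein gradient descent for the linear
    functional F(mu) = E_mu[V(X)]: each particle moves along -grad V.

    This is only the simplest special case of the algorithm the abstract
    describes; the general descent direction would require solving a dual
    (variational) problem at each iteration.
    """
    x = particles.copy()
    for _ in range(n_iters):
        x -= step * grad_V(x)     # push particles along the negative Wasserstein gradient
    return x

# Toy objective: V(x) = 0.5 * ||x - m||^2, so the minimizer is a point mass at m
# and the iteration converges linearly, consistent with the PL-type guarantee.
m = np.array([1.0, -2.0])
grad_V = lambda x: x - m
rng = np.random.default_rng(0)
init = rng.standard_normal((1000, 2))
out = particle_wasserstein_gd(init, grad_V)
print(out.mean(axis=0))   # close to m
```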
arXiv Detail & Related papers (2020-12-21T18:33:13Z)