Discriminator optimal transport
- URL: http://arxiv.org/abs/1910.06832v3
- Date: Tue, 8 Aug 2023 07:50:36 GMT
- Title: Discriminator optimal transport
- Authors: Akinori Tanaka
- Abstract summary: We show that the discriminator optimization process increases a lower bound of the dual cost function for the Wasserstein distance between the target distribution $p$ and the generator distribution $p_G$.
This implies that the trained discriminator can approximate optimal transport (OT) from $p_G$ to $p$.
We show that it improves the Inception Score and FID of unconditional GANs trained on CIFAR-10 and STL-10, and of a publicly available pre-trained conditional GAN model on ImageNet.
- Score: 6.624726878647543
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Within a broad class of generative adversarial networks, we show that
the discriminator optimization process increases a lower bound of the dual cost
function for the Wasserstein distance between the target distribution $p$ and
the generator distribution $p_G$. This implies that the trained discriminator
can approximate optimal transport (OT) from $p_G$ to $p$. Based on some
experiments and a bit of OT theory, we propose a discriminator optimal
transport (DOT) scheme to improve generated images. We show that it improves
the Inception Score and FID of unconditional GANs trained on CIFAR-10 and
STL-10, and of a publicly available pre-trained conditional GAN model on
ImageNet.
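As a concrete illustration of how a trained discriminator can act as an approximate transport map, below is a minimal sketch of a DOT refinement step in sample space. The names `G` and `D` stand for assumed pretrained generator and discriminator modules (the discriminator is taken to be roughly 1-Lipschitz, e.g. via spectral normalization); the step size, iteration count, and the omission of the paper's Lipschitz-constant rescaling are simplifying assumptions, so this is a sketch rather than the paper's exact procedure.

```python
# Minimal DOT-style refinement sketch: move a generated sample y = G(z)
# by gradient descent on the objective  ||x - y||_2 - D(x),  starting from x = y.
import torch

def dot_refine(G, D, z, n_steps=20, lr=0.01, eps=1e-12):
    with torch.no_grad():
        y = G(z)                               # initial generated sample
    x = y.clone().requires_grad_(True)         # variable to be transported
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        # Euclidean transport cost, smoothed so the gradient is defined at x == y
        cost = torch.sqrt(((x - y) ** 2).flatten(1).sum(dim=1) + eps)
        objective = (cost - D(x).view(-1)).sum()
        objective.backward()
        opt.step()
    return x.detach()

# Hypothetical usage: z = torch.randn(64, 128); x_refined = dot_refine(G, D, z)
```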
Related papers
- Double-Bounded Optimal Transport for Advanced Clustering and
Classification [58.237576976486544]
We propose Doubly Bounded Optimal Transport (DB-OT), which assumes that the target distribution is restricted to lie between two boundaries instead of being fixed.
We show that our method achieves good results with an improved inference scheme at test time; a toy linear-program sketch of the doubly bounded constraint appears after this list.
arXiv Detail & Related papers (2024-01-21T07:43:01Z) - Normalizing flows as approximations of optimal transport maps via linear-control neural ODEs [49.1574468325115]
"Normalizing Flows" is related to the task of constructing invertible transport maps between probability measures by means of deep neural networks.
We consider the problem of recovering the $W_2$-optimal transport map $T$ between absolutely continuous measures $\mu,\nu\in\mathcal{P}(\mathbb{R}^n)$ as the flow of a linear-control neural ODE.
arXiv Detail & Related papers (2023-11-02T17:17:03Z) - Computing high-dimensional optimal transport by flow neural networks [22.320632565424745]
This work develops a flow-based model that transports from $P$ to an arbitrary $Q$ where both distributions are only accessible via finite samples.
We propose to learn the dynamic optimal transport between $P$ and $Q$ by training a flow neural network.
The trained optimal transport flow subsequently allows for performing many downstream tasks, including infinitesimal density ratio estimation (DRE) and distribution interpolation in the latent space for generative models.
arXiv Detail & Related papers (2023-05-19T17:48:21Z) - SAN: Inducing Metrizability of GAN with Discriminative Normalized Linear Layer [20.667910240515482]
Generative adversarial networks (GANs) learn a target probability distribution by optimizing a generator and a discriminator with minimax objectives.
This paper addresses the question of whether such optimization actually provides the generator with gradients that make its distribution close to the target distribution.
We propose a novel GAN training scheme, called the slicing adversarial network (SAN).
arXiv Detail & Related papers (2023-01-30T12:03:44Z) - Generalized Differentiable RANSAC [95.95627475224231]
$\nabla$-RANSAC is a differentiable RANSAC that allows learning the entire randomized robust estimation pipeline.
$\nabla$-RANSAC is superior to the state-of-the-art in terms of accuracy while running at a similar speed to its less accurate alternatives.
arXiv Detail & Related papers (2022-12-26T15:13:13Z) - Training Wasserstein GANs without gradient penalties [4.0489350374378645]
We propose a stable method to train Wasserstein generative adversarial networks.
We experimentally show that this algorithm can effectively enforce the Lipschitz constraint on the discriminator.
Our method requires no gradient penalties and is computationally more efficient than other methods.
arXiv Detail & Related papers (2021-10-27T03:46:13Z) - Convergence and Sample Complexity of SGD in GANs [15.25030172685628]
We provide convergence guarantees on training Generative Adversarial Networks (GANs) via SGD.
We consider learning a target distribution modeled by a 1-layer Generator network with a non-linear activation function.
Our results apply to a broad class of non-linear activation functions $\phi$, including ReLUs, and are enabled by a connection with truncated statistics.
arXiv Detail & Related papers (2020-12-01T18:50:38Z) - Sampling-Decomposable Generative Adversarial Recommender [84.05894139540048]
We propose a Sampling-Decomposable Generative Adversarial Recommender (SD-GAR)
In the framework, the divergence between the generator and the optimum is compensated by self-normalized importance sampling.
We extensively evaluate the proposed algorithm with five real-world recommendation datasets.
arXiv Detail & Related papers (2020-11-02T13:19:10Z) - Discriminator Contrastive Divergence: Semi-Amortized Generative Modeling
by Exploring Energy of the Discriminator [85.68825725223873]
Generative Adversarial Networks (GANs) have shown great promise in modeling high dimensional data.
We introduce the Discriminator Contrastive Divergence, which is well motivated by the property of WGAN's discriminator.
We demonstrate the benefits of significantly improved generation on both synthetic data and several real-world image generation benchmarks.
arXiv Detail & Related papers (2020-04-05T01:50:16Z) - Your GAN is Secretly an Energy-based Model and You Should use
Discriminator Driven Latent Sampling [106.68533003806276]
We show that improved samples can be obtained by sampling in latent space according to an energy-based model induced by the sum of the latent prior log-density and the discriminator output score.
We show that Discriminator Driven Latent Sampling (DDLS) is highly efficient compared to previous methods that work in the high-dimensional pixel space; a minimal Langevin-dynamics sketch of this latent-space sampling appears after this list.
arXiv Detail & Related papers (2020-03-12T23:33:50Z)
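Toy sketch referenced in the DB-OT entry above: the doubly bounded marginal constraint can be written as a small linear program in which the target marginal is only required to lie between a lower and an upper bound. The cost matrix, marginals, and bounds below are made-up illustrative values; this shows the constraint structure only, not the paper's clustering/classification algorithm.

```python
# Doubly bounded OT as a linear program: min <C, P> with a fixed source
# marginal and a target marginal constrained to lie within [lower, upper].
import numpy as np
from scipy.optimize import linprog

n, m = 4, 3
rng = np.random.default_rng(0)
C = rng.random((n, m))                 # illustrative cost matrix
a = np.full(n, 1.0 / n)                # fixed source marginal
lower = np.full(m, 0.2)                # lower bound on target marginal
upper = np.full(m, 0.5)                # upper bound on target marginal

# Row-sum equality constraints: sum_j P_ij = a_i
A_eq = np.zeros((n, n * m))
for i in range(n):
    A_eq[i, i * m:(i + 1) * m] = 1.0

# Column-sum inequality constraints: lower_j <= sum_i P_ij <= upper_j
col = np.zeros((m, n * m))
for j in range(m):
    col[j, j::m] = 1.0
A_ub = np.vstack([col, -col])
b_ub = np.concatenate([upper, -lower])

res = linprog(C.ravel(), A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=a, bounds=(0, None))
P = res.x.reshape(n, m)                # transport plan with bounded target marginal
print(P.sum(axis=0))                   # column sums lie within [0.2, 0.5]
```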
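Sketch referenced in the DDLS entry above: a minimal latent-space Langevin sampler for the energy induced by the latent prior log-density plus the discriminator score. The callables `G` and `D`, the standard-normal prior, and the step-size settings are assumptions for illustration, not the paper's exact procedure.

```python
# Unadjusted Langevin dynamics in latent space for the energy
#   E(z) = ||z||^2 / 2 - D(G(z))
# i.e. negative prior log-density (standard normal) plus negative discriminator score.
import torch

def ddls_sample(G, D, z_init, n_steps=100, step=0.01):
    z = z_init.detach().clone().requires_grad_(True)
    for _ in range(n_steps):
        energy = 0.5 * (z ** 2).flatten(1).sum(dim=1) - D(G(z)).view(-1)
        grad, = torch.autograd.grad(energy.sum(), z)
        with torch.no_grad():
            z = z - 0.5 * step * grad + (step ** 0.5) * torch.randn_like(z)
        z.requires_grad_(True)
    with torch.no_grad():
        return G(z)                    # decode the Langevin-refined latents
```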