Making Method of Moments Great Again? -- How can GANs learn
distributions
- URL: http://arxiv.org/abs/2003.04033v3
- Date: Wed, 17 Feb 2021 20:56:37 GMT
- Title: Making Method of Moments Great Again? -- How can GANs learn
distributions
- Authors: Yuanzhi Li, Zehao Dou
- Abstract summary: Generative Adversarial Networks (GANs) are widely used models to learn complex real-world distributions.
In GANs, the training of the generator usually stops when the discriminator can no longer distinguish the generator's output from the set of training examples.
We establish theoretical results towards understanding this generator-discriminator training process.
- Score: 34.91089650516183
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative Adversarial Networks (GANs) are widely used models to learn
complex real-world distributions. In GANs, the training of the generator
usually stops when the discriminator can no longer distinguish the generator's
output from the set of training examples. A central question about GANs is
whether, when training stops, the generated distribution is actually close to
the target distribution, and how the training process reaches such
configurations efficiently. In this paper, we establish theoretical results
towards understanding this generator-discriminator training process. We
empirically observe that during the early stage of GAN training, the
discriminator tries to force the generator to match the low-degree moments
between the generator's output and the target distribution. Moreover, we prove
that by matching these empirical moments over polynomially many training
examples alone, the generator can already learn a notable class of
distributions,
including those that can be generated by two-layer neural networks.
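To make the moment-matching picture concrete, here is a minimal sketch (our own illustration, not the paper's construction): a two-layer ReLU generator is trained purely by matching low-degree per-coordinate empirical moments of samples from a fixed two-layer target network. The architecture, moment degrees, and optimizer are assumptions made for this example.

```python
# Illustrative sketch only (our assumptions, not the paper's construction):
# train a two-layer ReLU generator by matching per-coordinate empirical
# moments of degree 1..3 against samples from a fixed two-layer target net.
import torch

torch.manual_seed(0)

def moments(x, max_degree=3):
    # Per-coordinate empirical moments E[x^k] for k = 1..max_degree.
    return torch.stack([(x ** k).mean(dim=0) for k in range(1, max_degree + 1)])

d, n = 4, 8192
W_true = torch.randn(d, d)                        # unknown target network
target_m = moments(torch.relu(torch.randn(n, d) @ W_true.T))

W = torch.randn(d, d, requires_grad=True)         # learner's two-layer generator
opt = torch.optim.Adam([W], lr=1e-2)
for step in range(2000):
    fake = torch.relu(torch.randn(n, d) @ W.T)
    loss = ((moments(fake) - target_m) ** 2).sum()  # moment-matching objective
    opt.zero_grad(); loss.backward(); opt.step()

print("final moment gap:", loss.item())
```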
Related papers
- Improving Out-of-Distribution Robustness of Classifiers via Generative
Interpolation [56.620403243640396]
Deep neural networks achieve superior performance for learning from independent and identically distributed (i.i.d.) data.
However, their performance deteriorates significantly when handling out-of-distribution (OoD) data.
We develop a simple yet effective method called Generative Interpolation to fuse generative models trained from multiple domains for synthesizing diverse OoD samples.
arXiv Detail & Related papers (2023-07-23T03:53:53Z)
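One way to realize such fusion, sketched below under our own assumptions (same-architecture generators, linear parameter interpolation), is to blend the weights of per-domain generators and sample from the interpolated model; the paper's exact procedure may differ.

```python
# A minimal sketch of fusing two same-architecture generators by linearly
# interpolating their parameters and sampling from the blend. This is an
# assumption-laden illustration, not the paper's exact procedure.
import copy
import torch
import torch.nn as nn

def make_generator():
    return nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))

g_a, g_b = make_generator(), make_generator()  # stand-ins for per-domain models

def interpolate(g1, g2, alpha):
    g = copy.deepcopy(g1)
    with torch.no_grad():
        for p, p1, p2 in zip(g.parameters(), g1.parameters(), g2.parameters()):
            p.copy_((1 - alpha) * p1 + alpha * p2)
    return g

# Sample "in-between domain" points from a random blend of the two generators.
alpha = torch.rand(()).item()
samples = interpolate(g_a, g_b, alpha)(torch.randn(128, 16))
print(samples.shape)  # torch.Size([128, 2])
```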
- Effective Dynamics of Generative Adversarial Networks [16.51305515824504]
Generative adversarial networks (GANs) are a class of machine-learning models that use adversarial training to generate new samples.
One major form of training failure, known as mode collapse, involves the generator failing to reproduce the full diversity of modes in the target probability distribution.
We present an effective model of GAN training, which captures the learning dynamics by replacing the generator neural network with a collection of particles in the output space.
arXiv Detail & Related papers (2022-12-08T22:04:01Z)
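A minimal rendition of this particle picture, with an assumed small MLP discriminator and SGD steps, looks like the following: the "generator" is just a set of trainable output-space points pushed toward the discriminator's real region.

```python
# Our minimal rendition of the particle picture: the "generator" is a set of
# trainable points in output space; the discriminator is a small assumed MLP.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
real = 0.5 * torch.randn(512, 1) + 2.0               # 1-D target centered at 2
particles = torch.randn(64, 1, requires_grad=True)   # generator = particles

disc = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                           torch.nn.Linear(32, 1))
opt_d = torch.optim.SGD(disc.parameters(), lr=0.1)
opt_p = torch.optim.SGD([particles], lr=0.1)

for step in range(500):
    # Discriminator step: logistic loss separating real points from particles.
    loss_d = (F.softplus(-disc(real)).mean()
              + F.softplus(disc(particles.detach())).mean())
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Particle step: each particle ascends the discriminator's score.
    loss_p = F.softplus(-disc(particles)).mean()
    opt_p.zero_grad(); loss_p.backward(); opt_p.step()

print("particle mean:", particles.mean().item())     # drifts toward ~2.0
```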
- On some theoretical limitations of Generative Adversarial Networks [77.34726150561087]
It is a general assumption that GANs can generate any probability distribution.
We provide a new result based on Extreme Value Theory showing that GANs can't generate heavy-tailed distributions.
arXiv Detail & Related papers (2021-10-21T06:10:38Z)
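A quick numeric illustration of the underlying intuition (ours, not the paper's argument): pushing light-tailed Gaussian noise through a Lipschitz network cannot manufacture the extreme quantiles of a genuinely heavy-tailed law such as a Pareto distribution.

```python
# Our own quick demo of the intuition: a Lipschitz map of Gaussian noise
# stays light-tailed, so its extreme quantiles are tiny next to a Pareto's.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

z = rng.standard_normal(n)
gan_like = 0.8 * np.tanh(0.9 * z)       # Lipschitz "generator" on Gaussian noise
heavy = rng.pareto(1.5, n) + 1.0        # Pareto(alpha=1.5): heavy-tailed target

for q in (0.99, 0.9999, 0.999999):
    print(f"q={q}: gan-like {np.quantile(gan_like, q):8.4f}"
          f"   pareto {np.quantile(heavy, q):12.1f}")
```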
- MG-GAN: A Multi-Generator Model Preventing Out-of-Distribution Samples in Pedestrian Trajectory Prediction [0.6445605125467573]
We propose a multi-generator model for pedestrian trajectory prediction.
Each generator specializes in learning a distribution over trajectories routing towards one of the primary modes in the scene.
A second network learns a categorical distribution over these generators, conditioned on the dynamics and scene input.
This architecture allows us to effectively sample from specialized generators and to significantly reduce the out-of-distribution samples compared to single generator methods.
arXiv Detail & Related papers (2021-08-20T17:10:39Z)
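Sampling from such a model can be sketched as follows; the layer sizes, scene encoding, and generator architecture are placeholders we assume for illustration.

```python
# Sketch of sampling from a multi-generator model as summarized above; the
# layer sizes, scene encoding, and generator architecture are placeholders.
import torch
import torch.nn as nn

K, scene_dim, z_dim, out_dim = 3, 8, 4, 2

generators = nn.ModuleList(
    nn.Sequential(nn.Linear(scene_dim + z_dim, 32), nn.ReLU(),
                  nn.Linear(32, out_dim))
    for _ in range(K)
)
# Second network: categorical distribution over generators given the scene.
selector = nn.Sequential(nn.Linear(scene_dim, 32), nn.ReLU(), nn.Linear(32, K))

def sample(scene):
    probs = torch.softmax(selector(scene), dim=-1)
    k = int(torch.distributions.Categorical(probs).sample())  # pick a generator
    z = torch.randn(z_dim)
    return generators[k](torch.cat([scene, z])), k

scene = torch.randn(scene_dim)
y, k = sample(scene)
print("generator", k, "->", y.detach())
```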
- Self-supervised GANs with Label Augmentation [43.78253518292111]
We propose a novel self-supervised GANs framework with label augmentation, i.e., augmenting the GAN labels (real or fake) with the self-supervised pseudo-labels.
We demonstrate that the proposed method significantly outperforms competitive baselines on both generative modeling and representation learning.
arXiv Detail & Related papers (2021-06-16T07:58:00Z)
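A sketch of the label-augmentation idea, assuming rotation prediction as the pretext task (the choice of pretext task here is ours): the discriminator's binary real/fake head becomes a joint head over 2 x R classes.

```python
# Sketch of label augmentation as summarized above: instead of a binary
# real/fake head, the discriminator classifies into 2 x R joint classes,
# pairing the GAN label with a self-supervised pseudo-label (here, one of
# R = 4 image rotations; rotation as the pretext task is our assumption).
import torch
import torch.nn as nn
import torch.nn.functional as F

R = 4  # pseudo-labels: rotations by 0/90/180/270 degrees

disc = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 128), nn.ReLU(),
                     nn.Linear(128, 2 * R))  # joint (real/fake, rotation) head

def augmented_loss(images, is_real):
    # Rotate each batch copy by a known amount; the rotation index is free
    # supervision, and the GAN label shifts it into the joint class range.
    losses = []
    for r in range(R):
        rotated = torch.rot90(images, r, dims=(-2, -1))
        joint = r + (0 if is_real else R)            # class in [0, 2R)
        target = torch.full((images.size(0),), joint, dtype=torch.long)
        losses.append(F.cross_entropy(disc(rotated), target))
    return torch.stack(losses).mean()

real = torch.randn(16, 1, 32, 32)
fake = torch.randn(16, 1, 32, 32)
print(augmented_loss(real, True) + augmented_loss(fake, False))
```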
- Forward Super-Resolution: How Can GANs Learn Hierarchical Generative Models for Real-World Distributions [66.05472746340142]
Generative adversarial networks (GANs) are among the most successful models for learning high-complexity, real-world distributions.
In this paper, we show how GANs can efficiently learn the distribution of real-life images.
arXiv Detail & Related papers (2021-06-04T17:33:29Z)
- Training Generative Adversarial Networks in One Stage [58.983325666852856]
We introduce a general training scheme that enables training GANs efficiently in only one stage.
We show that the proposed method is readily applicable to other adversarial-training scenarios, such as data-free knowledge distillation.
arXiv Detail & Related papers (2021-02-28T09:03:39Z)
- The Hidden Tasks of Generative Adversarial Networks: An Alternative Perspective on GAN Training [1.964574177805823]
We present an alternative perspective on the training of generative adversarial networks (GANs).
We show that the training step for a GAN generator decomposes into two implicit sub-problems.
We experimentally validate our main theoretical result and discuss implications for alternative training methods.
arXiv Detail & Related papers (2021-01-28T08:17:29Z)
- Mode Penalty Generative Adversarial Network with adapted Auto-encoder [0.15229257192293197]
We propose a mode penalty GAN combined with a pre-trained auto-encoder for an explicit representation of generated and real data samples in the encoded space.
Through experimental evaluations, we demonstrate that applying the proposed method to GANs stabilizes the generator's optimization and speeds up convergence.
arXiv Detail & Related papers (2020-11-16T03:39:53Z)
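One way such a penalty can be realized in the encoder's latent space, under our own choice of penalty (mean pairwise distance among encoded fakes), is sketched below; the paper's exact formulation may differ.

```python
# Sketch of a mode penalty in a pre-trained encoder's latent space: encode
# generated samples and penalize them for clustering too tightly (a symptom
# of mode collapse). The mean-pairwise-distance penalty is our illustrative
# choice, not necessarily the paper's exact formulation.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 8))
for p in encoder.parameters():
    p.requires_grad_(False)  # pre-trained and frozen in this sketch

def mode_penalty(fake_batch):
    h = encoder(fake_batch)                 # encoded fakes
    dists = torch.cdist(h, h)               # pairwise distances (diag is 0)
    n = h.size(0)
    off_diag = dists.sum() / (n * (n - 1))  # mean over i != j
    return -off_diag  # small spread is penalized when added to the G loss

fake = torch.randn(64, 2, requires_grad=True)
print(mode_penalty(fake))
```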
- Unsupervised Controllable Generation with Self-Training [90.04287577605723]
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
arXiv Detail & Related papers (2020-07-17T21:50:35Z)