Lessons Learned from the Training of GANs on Artificial Datasets
- URL: http://arxiv.org/abs/2007.06418v2
- Date: Tue, 14 Jul 2020 15:48:16 GMT
- Title: Lessons Learned from the Training of GANs on Artificial Datasets
- Authors: Shichang Tang
- Abstract summary: Generative Adversarial Networks (GANs) have made great progress in synthesizing realistic images in recent years.
GANs are prone to underfitting or overfitting, making their analysis difficult and constrained.
We train them on artificial datasets where there are infinitely many samples and the real data distributions are simple.
We find that training mixtures of GANs yields a larger performance gain than increasing the network depth or width.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative Adversarial Networks (GANs) have made great progress in
synthesizing realistic images in recent years. However, they are often trained
on image datasets with either too few samples or too many classes belonging to
different data distributions. Consequently, GANs are prone to underfitting or
overfitting, making their analysis difficult and constrained. Therefore,
in order to conduct a thorough study on GANs while obviating unnecessary
interferences introduced by the datasets, we train them on artificial datasets
where there are infinitely many samples and the real data distributions are
simple, high-dimensional and have structured manifolds. Moreover, the
generators are designed such that optimal sets of parameters exist.
Empirically, we find that under various distance measures, the generator fails
to learn such parameters with the GAN training procedure. We also find that
training mixtures of GANs yields a larger performance gain than increasing
the network depth or width once the model complexity is high enough. Our
experimental results demonstrate that a mixture of generators can discover
different modes or different classes automatically in an unsupervised setting,
which we attribute to the distribution of the generation and discrimination
tasks across multiple generators and discriminators. As an example of the
generalizability of our conclusions to realistic datasets, we train a mixture
of GANs on the CIFAR-10 dataset and our method significantly outperforms the
state-of-the-art in terms of popular metrics, i.e., Inception Score (IS) and
Fréchet Inception Distance (FID).
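As a hedged illustration of the mixture-of-GANs setup on an artificial dataset: the architectures, the Gaussian-mixture toy data, and all hyperparameters in the sketch below are assumptions for illustration, not the paper's configuration.

```python
# Minimal sketch of a mixture of GANs trained on an artificial dataset
# with "infinitely many samples" (fresh draws every step). Architectures,
# the toy data, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
K, Z_DIM, X_DIM = 4, 16, 32              # mixture size, latent dim, data dim
MODES = 4.0 * torch.randn(K, X_DIM)      # fixed mode centers of the toy data

def real_batch(n):
    # Draw a fresh batch from the true distribution at every step.
    idx = torch.randint(K, (n,))
    return MODES[idx] + 0.1 * torch.randn(n, X_DIM)

def mlp(din, dout):
    return nn.Sequential(nn.Linear(din, 64), nn.ReLU(), nn.Linear(64, dout))

gens = [mlp(Z_DIM, X_DIM) for _ in range(K)]
discs = [nn.Sequential(mlp(X_DIM, 1), nn.Sigmoid()) for _ in range(K)]
g_opts = [torch.optim.Adam(g.parameters(), lr=2e-4) for g in gens]
d_opts = [torch.optim.Adam(d.parameters(), lr=2e-4) for d in discs]
bce = nn.BCELoss()
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(2000):
    for G, D, g_opt, d_opt in zip(gens, discs, g_opts, d_opts):
        x, z = real_batch(64), torch.randn(64, Z_DIM)
        d_opt.zero_grad()                # discriminator: real vs. fake
        d_loss = bce(D(x), ones) + bce(D(G(z).detach()), zeros)
        d_loss.backward()
        d_opt.step()
        g_opt.zero_grad()                # generator: fool its discriminator
        g_loss = bce(D(G(z)), ones)
        g_loss.backward()
        g_opt.step()

# Sample from the mixture: pick one generator uniformly at random.
samples = gens[torch.randint(K, ()).item()](torch.randn(8, Z_DIM))
```

In such a toy setup, each generator tends to settle on a subset of the modes, mirroring the abstract's observation that a mixture can discover modes or classes without supervision; the exact pairing of generators and discriminators in the paper may differ from this one-pair-per-component sketch.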
Related papers
- SMaRt: Improving GANs with Score Matching Regularity [94.81046452865583]
Generative adversarial networks (GANs) usually struggle in learning from highly diverse data, whose underlying manifold is complex.
We show that score matching serves as a promising solution to this issue thanks to its capability of persistently pushing the generated data points towards the real data manifold.
We propose to improve the optimization of GANs with score matching regularity (SMaRt); a hedged sketch of the general idea follows this entry.
arXiv Detail & Related papers (2023-11-30T03:05:14Z)
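The abstract only states the high-level idea, so the following is a hedged sketch of one way a score-based regularizer can push generated points toward the data manifold. `score_net` (an assumed pretrained approximation of the data score, grad_x log p_data) and the weight `lam` are illustrative assumptions, not SMaRt's actual objective.

```python
# Sketch of a score-based regularizer on generated samples. `score_net`
# is an assumed pretrained network approximating grad_x log p_data(x);
# this is NOT the paper's exact SMaRt objective.
import torch

def score_regularizer(x_fake, score_net, lam=0.1):
    with torch.no_grad():                 # freeze the score direction
        direction = score_net(x_fake)     # same shape as x_fake
    # Gradient descent on this term moves x_fake along the data score,
    # i.e. toward higher data density (the real data manifold).
    return -lam * (direction * x_fake).sum() / x_fake.shape[0]

# Usage in a generator step, added to the usual adversarial loss:
#   x_fake = G(z)
#   loss = adv_loss(D(x_fake)) + score_regularizer(x_fake, score_net)
```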
- LD-GAN: Low-Dimensional Generative Adversarial Network for Spectral Image Generation with Variance Regularization [72.4394510913927]
Deep learning methods are state-of-the-art for spectral image (SI) computational tasks.
GANs enable diverse augmentation by learning and sampling from the data distribution.
GAN-based SI generation is challenging because the high-dimensional nature of this kind of data hinders the convergence of GAN training, yielding suboptimal generation.
We propose a statistical regularization that controls the variance of the low-dimensional representation during autoencoder training and yields high diversity in the samples generated with the GAN; a hedged sketch follows this entry.
arXiv Detail & Related papers (2023-04-29T00:25:02Z)
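As a hedged sketch only: one simple way to regularize the variance of an autoencoder's low-dimensional codes is to penalize the deviation of the per-dimension batch variance from a target value. The target, weight, and usage below are assumptions, not the paper's exact formulation.

```python
# Sketch: keep the latent batch variance near a target so the GAN later
# trained in this latent space sees a well-spread code distribution.
# Target value and weight are illustrative assumptions.
import torch

def variance_regularizer(z, target_var=1.0, lam=0.1):
    var = z.var(dim=0, unbiased=False)    # per-dimension batch variance
    return lam * ((var - target_var) ** 2).mean()

# Usage in an autoencoder step:
#   z = encoder(x); x_hat = decoder(z)
#   loss = recon_loss(x_hat, x) + variance_regularizer(z)
```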
- Distributed Traffic Synthesis and Classification in Edge Networks: A Federated Self-supervised Learning Approach [83.2160310392168]
This paper proposes FS-GAN to support automatic traffic analysis and synthesis over a large number of heterogeneous datasets.
FS-GAN is composed of multiple distributed Generative Adversarial Networks (GANs).
FS-GAN can classify data of unknown types of service and create synthetic samples that capture the traffic distribution of the unknown types.
arXiv Detail & Related papers (2023-02-01T03:23:11Z)
- ScoreMix: A Scalable Augmentation Strategy for Training GANs with Limited Data [93.06336507035486]
Generative Adversarial Networks (GANs) typically suffer from overfitting when limited training data is available.
We present ScoreMix, a novel and scalable data augmentation approach for various image synthesis tasks.
arXiv Detail & Related papers (2022-10-27T02:55:15Z)
- FairGen: Fair Synthetic Data Generation [0.3149883354098941]
We propose a pipeline to generate fairer synthetic data independent of the GAN architecture.
We claim that while generating synthetic data, most GANs amplify the bias present in the training data, but that by removing these bias-inducing samples, GANs focus more on real, informative samples.
arXiv Detail & Related papers (2022-10-24T08:13:47Z)
- MaGNET: Uniform Sampling from Deep Generative Network Manifolds Without Retraining [9.294580808320534]
We develop a differential geometry based sampler -- coined MaGNET -- that, given any trained DGN, produces samples that are uniformly distributed on the learned manifold.
We prove theoretically and empirically that our technique produces a uniform distribution on the manifold regardless of the training set distribution; a hedged sketch of the volume-reweighting idea follows this entry.
arXiv Detail & Related papers (2021-10-15T11:12:56Z)
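A hedged sketch of the volume-reweighting idea behind manifold-uniform sampling: latents are drawn normally, weighted by the generator's local volume element sqrt(det(J^T J)), and resampled in proportion to that weight. The importance scheme, proposal counts, and numerics below are illustrative assumptions; MaGNET's actual algorithm is in the paper.

```python
# Sketch: approximate uniform sampling on a generator's learned manifold
# by reweighting latent draws with the local volume element of the
# generator Jacobian. Proposal counts and numerics are assumptions.
import torch
from torch.autograd.functional import jacobian

def volume_element(G, z):
    J = jacobian(G, z)                    # (x_dim, z_dim) at a single z
    return torch.sqrt(torch.det(J.T @ J)).clamp_min(1e-12)

def manifold_uniform_sample(G, z_dim, n_proposals=512, n_keep=64):
    zs = torch.randn(n_proposals, z_dim)
    w = torch.stack([volume_element(G, z) for z in zs])
    idx = torch.multinomial(w / w.sum(), n_keep, replacement=True)
    return G(zs[idx])
```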
- VARGAN: Variance Enforcing Network Enhanced GAN [0.6445605125467573]
We introduce a new GAN architecture called variance enforcing GAN (VARGAN).
VARGAN incorporates a third network to introduce diversity in the generated samples.
High diversity and low computational complexity, as well as fast convergence, make VARGAN a promising model to alleviate mode collapse.
arXiv Detail & Related papers (2021-09-05T16:28:21Z)
- On the Fairness of Generative Adversarial Networks (GANs) [1.061960673667643]
Generative adversarial networks (GANs) are one of the greatest advances in AI in recent years.
In this paper, we analyze and highlight fairness concerns of GAN models.
arXiv Detail & Related papers (2021-03-01T12:25:01Z)
- Generative Adversarial Networks (GANs): An Overview of Theoretical Model, Evaluation Metrics, and Recent Developments [9.023847175654602]
Generative Adversarial Network (GAN) is an effective method for producing samples from large-scale data distributions.
GANs provide an appropriate way to learn deep representations without widespread use of labeled training data.
In GANs, the generative model is estimated via a competitive process in which the generator and discriminator networks are trained simultaneously; the standard minimax objective is restated after this entry for reference.
arXiv Detail & Related papers (2020-05-27T05:56:53Z)
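For reference, the competitive process mentioned above is the standard two-player minimax game of the original GAN formulation:

```latex
\min_G \max_D \;
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```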
- When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability is still a lingering concern of generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss that improves generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss provide significant improvements on various vision tasks; a hedged sketch of a triplet loss over discriminator features follows this entry.
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
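A hedged sketch of a triplet loss over discriminator feature embeddings, with real samples as anchor and positive and generated samples as negative. The feature network and margin are assumptions; the paper's relation-network discriminator may pair samples differently.

```python
# Sketch: triplet loss on discriminator features. `feat_net` (an assumed
# embedding head of the discriminator) and the margin are illustrative;
# the paper's relation discriminator may differ.
import torch
import torch.nn.functional as F

def discriminator_triplet_loss(feat_net, x_real_a, x_real_p, x_fake, margin=1.0):
    a = feat_net(x_real_a)                # anchor: real sample
    p = feat_net(x_real_p)                # positive: another real sample
    n = feat_net(x_fake)                  # negative: generated sample
    return F.triplet_margin_loss(a, p, n, margin=margin)
```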
- Brainstorming Generative Adversarial Networks (BGANs): Towards Multi-Agent Generative Models with Distributed Private Datasets [70.62568022925971]
Generative adversarial networks (GANs) must be fed large datasets that adequately represent the data space.
In many scenarios, the available datasets may be limited and distributed across multiple agents, each of which is seeking to learn the distribution of the data on its own.
In this paper, a novel brainstorming GAN (BGAN) architecture is proposed, with which multiple agents can generate real-like data samples while operating in a fully distributed manner.
arXiv Detail & Related papers (2020-02-02T02:58:32Z)