PriorGAN: Real Data Prior for Generative Adversarial Nets
- URL: http://arxiv.org/abs/2006.16990v1
- Date: Tue, 30 Jun 2020 17:51:47 GMT
- Title: PriorGAN: Real Data Prior for Generative Adversarial Nets
- Authors: Shuyang Gu, Jianmin Bao, Dong Chen, Fang Wen
- Abstract summary: We propose a novel prior that captures the whole real data distribution for GANs; we call the resulting models PriorGANs.
Our experiments demonstrate that PriorGANs outperform the state-of-the-art on the CIFAR-10, FFHQ, LSUN-cat, and LSUN-bird datasets by large margins.
- Score: 36.01759301994946
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) have achieved rapid progress in
learning rich data distributions. However, we identify two main issues in
existing techniques: first, the low quality problem, where the learned
distribution contains a massive number of low quality samples; second, the
missing modes problem, where the learned distribution misses certain regions
of the real data distribution. To address these two issues, we propose a novel
prior that captures the whole real data distribution for GANs; we call the
resulting models PriorGANs.
To be specific, we adopt a simple yet elegant Gaussian Mixture Model (GMM) to
build an explicit probability distribution on the feature level for the whole
real data. By maximizing the probability of generated data, we can push the low
quality samples to high quality. Meanwhile, equipped with the prior, we can
estimate the missing modes in the learned distribution and design a sampling
strategy on the real data to solve the problem. The proposed real data prior
can generalize to various GAN training settings, such as LSGAN, WGAN-GP,
SNGAN, and even StyleGAN. Our experiments demonstrate that PriorGANs
outperform the state-of-the-art on the CIFAR-10, FFHQ, LSUN-cat, and LSUN-bird
datasets by large margins.
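The feature-level GMM prior described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration: synthetic Gaussian clusters stand in for features from a fixed feature extractor, and the component count is an assumption, not the paper's exact configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-ins for feature vectors (hypothetical): real features cluster
# around two modes; generated features are spread out and noisier.
real_feats = np.concatenate([
    rng.normal(loc=-2.0, scale=0.3, size=(500, 8)),
    rng.normal(loc=+2.0, scale=0.3, size=(500, 8)),
])
fake_feats = rng.normal(loc=0.0, scale=1.5, size=(200, 8))

# Build an explicit probability distribution on the feature level with a
# GMM fitted to the real data, as the abstract describes.
gmm = GaussianMixture(n_components=2, covariance_type="diag", random_state=0)
gmm.fit(real_feats)

# Per-sample log-likelihood under the real-data prior: low scores flag
# low-quality generated samples, which the method pushes toward
# high-density regions by maximizing this probability.
real_ll = gmm.score_samples(real_feats)
fake_ll = gmm.score_samples(fake_feats)
print(real_ll.mean() > fake_ll.mean())  # generated samples score lower
```

The same density estimate also supports the missing-modes side of the method: mixture components that generated samples rarely fall under indicate regions of the real distribution to sample more heavily during training.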
Related papers
- Theoretically Guaranteed Distribution Adaptable Learning [23.121014921407898]
We propose a novel framework called Distribution Adaptable Learning (DAL).
DAL enables the model to effectively track evolving data distributions.
This enhances the reusable and evolvable properties of DAL when accommodating evolving distributions.
arXiv Detail & Related papers (2024-11-05T09:10:39Z)
- Federated Learning for distribution skewed data using sample weights [3.6039117546761155]
This work focuses on improving federated learning performance for skewed data distribution across clients.
The main idea is to adjust the client distribution closer to the global distribution using sample weights.
We show that the proposed method not only improves federated learning accuracy but also significantly reduces communication costs.
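The sample-weighting idea can be sketched on toy data as follows. This is a numpy-only illustration with hypothetical client label sets; in a real federated setting the global distribution would have to be estimated without sharing raw data.

```python
import numpy as np

# Hypothetical label sets for two clients with skewed class distributions.
client_labels = [
    np.array([0, 0, 0, 0, 1, 2]),     # client A: mostly class 0
    np.array([1, 1, 1, 2, 2, 2, 0]),  # client B: mostly classes 1 and 2
]
num_classes = 3

# Global class distribution; here we simply pool the labels for the demo.
all_labels = np.concatenate(client_labels)
global_dist = np.bincount(all_labels, minlength=num_classes) / all_labels.size

# Per-sample weight = global frequency / local frequency of that sample's
# class: up-weighting under-represented classes pulls each client's
# effective training distribution toward the global one.
matches = []
for labels in client_labels:
    local_dist = np.bincount(labels, minlength=num_classes) / labels.size
    weights = global_dist[labels] / local_dist[labels]
    weighted = np.bincount(labels, weights=weights, minlength=num_classes)
    matches.append(np.allclose(weighted / weights.sum(), global_dist))
print(matches)  # prints [True, True]
```

When every class is present on a client, the reweighted empirical distribution matches the global one exactly; classes entirely absent from a client cannot be recovered by weighting alone.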
arXiv Detail & Related papers (2024-01-05T00:46:11Z)
- SMaRt: Improving GANs with Score Matching Regularity [94.81046452865583]
Generative adversarial networks (GANs) usually struggle in learning from highly diverse data, whose underlying manifold is complex.
We show that score matching serves as a promising solution to this issue thanks to its capability of persistently pushing the generated data points towards the real data manifold.
We propose to improve the optimization of GANs with score matching regularity (SMaRt).
arXiv Detail & Related papers (2023-11-30T03:05:14Z)
- MaGNET: Uniform Sampling from Deep Generative Network Manifolds Without Retraining [9.294580808320534]
We develop a differential geometry based sampler -- coined MaGNET -- that, given any trained DGN, produces samples that are uniformly distributed on the learned manifold.
We prove theoretically and empirically that our technique produces a uniform distribution on the manifold regardless of the training set distribution.
arXiv Detail & Related papers (2021-10-15T11:12:56Z)
- Improving Generative Adversarial Networks with Local Coordinate Coding [150.24880482480455]
Generative adversarial networks (GANs) have shown remarkable success in generating realistic data from some predefined prior distribution.
In practice, semantic information might be represented by some latent distribution learned from data.
We propose an LCCGAN model with local coordinate coding (LCC) to improve the performance of generating data.
arXiv Detail & Related papers (2020-07-28T09:17:50Z)
- Lessons Learned from the Training of GANs on Artificial Datasets [0.0]
Generative Adversarial Networks (GANs) have made great progress in synthesizing realistic images in recent years.
GANs are prone to underfitting or overfitting, which makes their analysis difficult and constrained.
We train them on artificial datasets where there are infinitely many samples and the real data distributions are simple.
We find that training mixtures of GANs yields larger performance gains than increasing the network depth or width.
arXiv Detail & Related papers (2020-07-13T14:51:02Z)
- MMCGAN: Generative Adversarial Network with Explicit Manifold Prior [78.58159882218378]
We propose to employ explicit manifold learning as prior to alleviate mode collapse and stabilize training of GAN.
Our experiments on both the toy data and real datasets show the effectiveness of MMCGAN in alleviating mode collapse, stabilizing training, and improving the quality of generated samples.
arXiv Detail & Related papers (2020-06-18T07:38:54Z)
- GANs with Conditional Independence Graphs: On Subadditivity of Probability Divergences [70.30467057209405]
Generative Adversarial Networks (GANs) are modern methods to learn the underlying distribution of a data set.
GANs are designed in a model-free fashion where no additional information about the underlying distribution is available.
We propose a principled design of a model-based GAN that uses a set of simple discriminators on the neighborhoods of the Bayes-net/MRF.
arXiv Detail & Related papers (2020-03-02T04:31:22Z)
- Brainstorming Generative Adversarial Networks (BGANs): Towards Multi-Agent Generative Models with Distributed Private Datasets [70.62568022925971]
Generative adversarial networks (GANs) must be fed large datasets that adequately represent the data space.
In many scenarios, the available datasets may be limited and distributed across multiple agents, each of which is seeking to learn the distribution of the data on its own.
In this paper, a novel brainstorming GAN (BGAN) architecture is proposed, with which multiple agents can generate real-like data samples while operating in a fully distributed manner.
arXiv Detail & Related papers (2020-02-02T02:58:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.