Forward Super-Resolution: How Can GANs Learn Hierarchical Generative Models for Real-World Distributions
- URL: http://arxiv.org/abs/2106.02619v2
- Date: Sun, 2 Apr 2023 06:19:14 GMT
- Title: Forward Super-Resolution: How Can GANs Learn Hierarchical Generative Models for Real-World Distributions
- Authors: Zeyuan Allen-Zhu and Yuanzhi Li
- Abstract summary: Generative adversarial networks (GANs) are among the most successful models for learning high-complexity, real-world distributions.
In this paper we show how GANs can efficiently learn distributions that are close to the distribution of real-life images.
- Score: 66.05472746340142
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) are among the most successful models
for learning high-complexity, real-world distributions. However, in theory, due
to the highly non-convex, non-concave landscape of the minimax training
objective, GANs remain among the least understood deep learning models. In
this work, we formally study how GANs can efficiently learn certain
hierarchically generated distributions that are close to the distribution of
real-life images. We prove that when a distribution has a structure that we
refer to as Forward Super-Resolution, then simply training generative
adversarial networks using stochastic gradient descent ascent (SGDA) can learn
this distribution efficiently, in both sample and time complexity. We also
provide empirical evidence that our assumption of "forward super-resolution" is
very natural in practice, and that the underlying learning mechanisms we study
in this paper (which allow GANs to be trained efficiently via SGDA in theory)
simulate the actual learning process of GANs on real-world problems.
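The training procedure the paper analyzes, stochastic gradient descent ascent (SGDA) on a hierarchical coarse-to-fine generator, can be sketched as follows. This is a minimal illustrative sketch assuming a toy two-stage architecture and the standard non-saturating losses; it is not the authors' theoretical construction.
```python
# Minimal sketch: SGDA training of a coarse-to-fine ("forward super-
# resolution" flavored) GAN. Architecture, resolutions, and hyperparameters
# are illustrative assumptions, not the paper's construction.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=64):
        super().__init__()
        # Stage 1: latent code -> coarse 8x8 image.
        self.coarse = nn.Sequential(nn.Linear(z_dim, 8 * 8), nn.Tanh())
        # Stage 2: learned "super-resolution" from 8x8 to 16x16.
        self.refine = nn.Sequential(
            nn.Upsample(scale_factor=2),
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, z):
        x = self.coarse(z).view(-1, 1, 8, 8)  # coarse sample
        return self.refine(x)                 # refined, higher-res sample

gen = Generator()
disc = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16, 128),
                     nn.LeakyReLU(0.2), nn.Linear(128, 1))
opt_d = torch.optim.SGD(disc.parameters(), lr=0.05)  # ascent player
opt_g = torch.optim.SGD(gen.parameters(), lr=0.05)   # descent player
bce = nn.BCEWithLogitsLoss()

def sgda_step(real):
    """One alternating stochastic gradient descent-ascent step."""
    n = real.size(0)
    z = torch.randn(n, 64)
    # Ascent step on the discriminator objective.
    d_loss = bce(disc(real), torch.ones(n, 1)) + \
             bce(disc(gen(z).detach()), torch.zeros(n, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # Descent step on the generator objective (non-saturating form).
    g_loss = bce(disc(gen(z)), torch.ones(n, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

for _ in range(5):  # toy loop on random stand-in "real" data
    sgda_step(torch.rand(32, 1, 16, 16) * 2 - 1)
```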
Related papers
- Parallelly Tempered Generative Adversarial Networks [7.94957965474334]
A generative adversarial network (GAN) has been a representative backbone model in generative artificial intelligence (AI).
This work analyzes the training instability and inefficiency in the presence of mode collapse by linking it to multimodality in the target distribution.
With our newly developed GAN objective function, the generator can learn all the tempered distributions simultaneously.
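One generic way to realize a family of tempered targets is to interpolate real data toward a simple reference distribution and condition the generator on the temperature. The sketch below illustrates only that generic idea and is an assumption for illustration, not the paper's actual objective.
```python
# Hedged illustration: a generic family of "tempered" targets built by
# interpolating data toward a Gaussian reference, with a temperature-
# conditioned generator. Not the paper's construction.
import torch
import torch.nn as nn

def tempered_batch(real, t):
    """t=1 -> real data; t=0 -> pure Gaussian reference."""
    noise = torch.randn_like(real)
    return t * real + (1.0 - t) * noise

class TemperedGenerator(nn.Module):
    def __init__(self, z_dim=16, x_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim + 1, 64), nn.ReLU(),
                                 nn.Linear(64, x_dim))

    def forward(self, z, t):
        # Temperature is an extra input, so one generator covers all levels.
        t_col = torch.full((z.size(0), 1), t)
        return self.net(torch.cat([z, t_col], dim=1))

gen = TemperedGenerator()
real = torch.randn(8, 32)            # stand-in for a real batch
for t in (0.25, 0.5, 1.0):           # all temperatures share parameters
    fake = gen(torch.randn(8, 16), t)
    target = tempered_batch(real, t) # a discriminator would compare these
```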
arXiv Detail & Related papers (2024-11-18T18:01:13Z) - DiffSG: A Generative Solver for Network Optimization with Diffusion Model [75.27274046562806]
Diffusion generative models can consider a broader range of solutions and exhibit stronger generalization by learning parameters.
We propose a new framework, which leverages intrinsic distribution learning of diffusion generative models to learn high-quality solutions.
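For intuition, a generic DDPM-style sampler that draws candidate solution vectors by iterative denoising might look as follows; the network and noise schedule are toy assumptions, not the DiffSG design.
```python
# Hedged sketch of the generic mechanism: a denoising diffusion model that
# samples candidate solution vectors by iterative denoising. Toy schedule
# and network, not the DiffSG architecture.
import torch
import torch.nn as nn

T = 50
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)

eps_net = nn.Sequential(nn.Linear(8 + 1, 64), nn.ReLU(), nn.Linear(64, 8))

@torch.no_grad()
def sample_solution(dim=8):
    x = torch.randn(1, dim)                     # start from pure noise
    for t in reversed(range(T)):
        t_in = torch.full((1, 1), t / T)        # crude time conditioning
        eps = eps_net(torch.cat([x, t_in], 1))  # predicted noise
        # Standard DDPM mean; fresh noise is added except at the last step.
        x = (x - betas[t] / (1 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)
    return x                                    # candidate solution vector

print(sample_solution())
```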
arXiv Detail & Related papers (2024-08-13T07:56:21Z) - SMaRt: Improving GANs with Score Matching Regularity [94.81046452865583]
Generative adversarial networks (GANs) usually struggle in learning from highly diverse data, whose underlying manifold is complex.
We show that score matching serves as a promising solution to this issue thanks to its capability of persistently pushing the generated data points towards the real data manifold.
We propose to improve the optimization of GANs with score matching regularity (SMaRt).
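The generic shape of such a regularizer, using a (hypothetical) pretrained score network to pull generated samples toward the data manifold, can be sketched as below; the weighting and the exact term are assumptions, not the paper's loss.
```python
# Hedged sketch: augment the generator loss with a term built from a frozen
# score network s(x) ~ grad_x log p_data(x). `score_net` and `lam` are
# placeholder assumptions, not the exact SMaRt term.
import torch
import torch.nn as nn

gen = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))
score_net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 32))
disc = nn.Sequential(nn.Linear(32, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
lam = 0.1                                    # regularizer weight (assumed)

z = torch.randn(8, 16)
fake = gen(z)
adv_loss = nn.functional.binary_cross_entropy_with_logits(
    disc(fake), torch.ones(8, 1))            # usual non-saturating GAN loss
with torch.no_grad():
    direction = score_net(fake)              # frozen score estimate at fake
# Gradient of this term w.r.t. `fake` is -direction, so the update pushes
# generated samples along the estimated score, toward the data manifold.
sm_reg = -(fake * direction).sum(dim=1).mean()
opt_g.zero_grad()
(adv_loss + lam * sm_reg).backward()
opt_g.step()                                 # only the generator is updated
```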
arXiv Detail & Related papers (2023-11-30T03:05:14Z) - Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
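A minimal sketch of the mechanism, inducing locations held directly in a learned feature space with a kernel applied to features; the Nyström-style predictor and all names here are illustrative assumptions, not necessarily IGN's exact objective.
```python
# Hedged sketch: inducing points live in a LEARNED feature space, so both
# the feature map and the inducing locations train end-to-end by SGD.
import torch
import torch.nn as nn

feature = nn.Sequential(nn.Linear(5, 32), nn.ReLU(), nn.Linear(32, 8))
Z = nn.Parameter(torch.randn(10, 8))   # inducing locations in feature space
u = nn.Parameter(torch.zeros(10))      # inducing outputs

def rbf(a, b, ls=1.0):
    return torch.exp(-torch.cdist(a, b).pow(2) / (2 * ls ** 2))

def predict(x):
    phi = feature(x)                          # map inputs to feature space
    Kxz = rbf(phi, Z)                         # cross-covariance
    Kzz = rbf(Z, Z) + 1e-4 * torch.eye(10)    # jitter for stability
    return Kxz @ torch.linalg.solve(Kzz, u)   # GP mean through inducing pts

x = torch.randn(4, 5)
print(predict(x))
```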
arXiv Detail & Related papers (2022-04-21T05:27:09Z) - Global-Local Regularization Via Distributional Robustness [26.983769514262736]
Deep neural networks are often vulnerable to adversarial examples and distribution shifts.
Recent approaches leverage distributionally robust optimization (DRO) to find the most challenging distribution.
We propose a novel regularization technique in the vein of the Wasserstein-based DRO framework.
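A common surrogate for Wasserstein-ball DRO is to penalize the loss at a worst-case input perturbation found by a gradient step; the sketch below shows that generic surrogate, with assumed step sizes and weights, not the paper's regularizer.
```python
# Hedged sketch: one-step inner maximization over an input perturbation,
# then an outer fit on clean data plus the robust penalty. Budgets and
# weights are assumptions for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(16, 10)
y = torch.randint(0, 2, (16,))

# Inner maximization: one ascent step on the input (local worst case).
delta = torch.zeros_like(x, requires_grad=True)
inner = loss_fn(model(x + delta), y)
grad, = torch.autograd.grad(inner, delta)
x_adv = x + 0.1 * grad.sign()          # epsilon = 0.1 (assumed budget)

# Outer minimization: clean loss plus a distributional-robustness term.
rho = 0.5                              # robustness weight (assumed)
loss = loss_fn(model(x), y) + rho * loss_fn(model(x_adv), y)
loss.backward()
```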
arXiv Detail & Related papers (2022-03-01T15:36:12Z) - Minimax Optimality (Probably) Doesn't Imply Distribution Learning for
GANs [44.4200799586461]
We show that standard cryptographic assumptions imply that this stronger condition is still insufficient.
Our techniques reveal a deep connection between GANs and PRGs, which we believe will lead to further insights into the computational landscape of GANs.
arXiv Detail & Related papers (2022-01-18T18:59:21Z) - Self-Ensembling GAN for Cross-Domain Semantic Segmentation [107.27377745720243]
This paper proposes a self-ensembling generative adversarial network (SE-GAN) exploiting cross-domain data for semantic segmentation.
In SE-GAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps, which, together with a discriminator, forms a GAN.
Despite its simplicity, we find SE-GAN can significantly boost the performance of adversarial training and enhance the stability of the model.
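A minimal sketch of the self-ensembling half of such a model, with the teacher maintained as an exponential moving average (EMA) of the student, a standard self-ensembling recipe; the toy segmentation networks and the omitted discriminator are illustrative assumptions, not SE-GAN's exact design.
```python
# Hedged sketch: EMA teacher-student consistency on unlabeled target data.
import copy
import torch
import torch.nn as nn

student = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(8, 5, 1))      # 5-class seg logits
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)                      # teacher is never trained

@torch.no_grad()
def ema_update(decay=0.99):
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(decay).add_(ps, alpha=1 - decay)

x_target = torch.randn(2, 3, 16, 16)             # unlabeled target image
# Consistency: student should match the (more stable) teacher's prediction.
cons = nn.functional.mse_loss(student(x_target),
                              teacher(x_target).detach())
cons.backward()
ema_update()                                      # teacher tracks student
```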
arXiv Detail & Related papers (2021-12-15T09:50:25Z) - Prb-GAN: A Probabilistic Framework for GAN Modelling [20.181803514993778]
We present a new variation that uses dropout to create a distribution over the network parameters with the posterior learnt using variational inference.
Our methods are extremely simple and require very little modification to existing GAN architecture.
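A minimal sketch of the generic mechanism, keeping dropout active at sampling time so repeated forward passes act as draws from a distribution over generator parameters; the details are assumptions, not the paper's exact posterior.
```python
# Hedged sketch: MC-dropout style sampling from a dropout generator, a
# stand-in for the dropout-induced parameter distribution described above.
import torch
import torch.nn as nn

gen = nn.Sequential(nn.Linear(16, 64), nn.ReLU(),
                    nn.Dropout(p=0.3),           # the stochastic weights
                    nn.Linear(64, 32))
gen.train()                           # keep dropout ON while sampling
z = torch.randn(1, 16)
samples = torch.stack([gen(z) for _ in range(8)])  # 8 draws ~ 8 generators
print(samples.std(dim=0).mean())      # spread from parameter uncertainty
```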
arXiv Detail & Related papers (2021-07-12T08:04:13Z) - Generalization Properties of Optimal Transport GANs with Latent
Distribution Learning [52.25145141639159]
We study how the interplay between the latent distribution and the complexity of the pushforward map affects performance.
Motivated by our analysis, we advocate learning the latent distribution as well as the pushforward map within the GAN paradigm.
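A minimal sketch of that idea, making the latent distribution itself trainable (here a toy Gaussian mixture with learnable means) alongside the pushforward map; the mixture form is an assumption for illustration.
```python
# Hedged sketch: learnable latent distribution plus pushforward map. Both
# the mixture means and the map receive gradients from a downstream loss.
import torch
import torch.nn as nn

K, z_dim = 4, 8
means = nn.Parameter(torch.randn(K, z_dim))      # learnable latent modes
pushforward = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(),
                            nn.Linear(64, 32))

def sample_latent(n):
    idx = torch.randint(0, K, (n,))              # uniform mixture weights
    return means[idx] + 0.1 * torch.randn(n, z_dim)

z = sample_latent(16)
x_fake = pushforward(z)   # feed to a discriminator / OT loss to train both
```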
arXiv Detail & Related papers (2020-07-29T07:31:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.