Pareto GAN: Extending the Representational Power of GANs to Heavy-Tailed
Distributions
- URL: http://arxiv.org/abs/2101.09113v1
- Date: Fri, 22 Jan 2021 14:06:02 GMT
- Title: Pareto GAN: Extending the Representational Power of GANs to Heavy-Tailed
Distributions
- Authors: Todd Huster, Jeremy E.J. Cohen, Zinan Lin, Kevin Chan, Charles
Kamhoua, Nandi Leslie, Cho-Yu Jason Chiang, Vyas Sekar
- Abstract summary: We show that existing GAN architectures do a poor job of matching the behavior of heavy-tailed distributions.
We use extreme value theory and the functional properties of neural networks to learn a distribution that matches the asymptotic behavior of the marginal feature distributions.
- Score: 6.356866333887868
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) are often billed as "universal
distribution learners", but precisely what distributions they can represent and
learn is still an open question. Heavy-tailed distributions are prevalent in
many different domains such as financial risk-assessment, physics, and
epidemiology. We observe that existing GAN architectures do a poor job of
matching the asymptotic behavior of heavy-tailed distributions, a problem that
we show stems from their construction. Additionally, when faced with the
infinite moments and large distances between outlier points that are
characteristic of heavy-tailed distributions, common loss functions produce
unstable or near-zero gradients. We address these problems with the Pareto GAN.
A Pareto GAN leverages extreme value theory and the functional properties of
neural networks to learn a distribution that matches the asymptotic behavior of
the marginal distributions of the features. We identify issues with standard
loss functions and propose the use of alternative metric spaces that enable
stable and efficient learning. Finally, we evaluate our proposed approach on a
variety of heavy-tailed datasets.
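To make the tail-matching idea concrete, here is a minimal sketch of the mechanism the abstract describes: estimate a tail index from data with a standard Hill estimator, then push a bounded generator output through a generalized Pareto quantile function so samples inherit that tail index. The names, the PyTorch framing, and the single-feature setting are illustrative assumptions, not the authors' implementation.

```python
import torch

def hill_estimator(x: torch.Tensor, k: int) -> torch.Tensor:
    """Hill estimate of the tail index xi from the k largest (positive) samples."""
    top = torch.topk(x, k + 1).values            # k+1 largest values, descending
    return (torch.log(top[:k]) - torch.log(top[k])).mean()

class ParetoTail(torch.nn.Module):
    """Map a generator output u in (0, 1) through the generalized Pareto
    quantile function so samples match a prescribed asymptotic tail."""
    def __init__(self, xi: float, scale: float = 1.0):
        super().__init__()
        self.xi, self.scale = xi, scale

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # GPD quantile: F^{-1}(u) = (scale / xi) * ((1 - u)^{-xi} - 1), for xi > 0
        return (self.scale / self.xi) * ((1.0 - u).pow(-self.xi) - 1.0)
```

In use, one would squash the raw generator output with a sigmoid before the `ParetoTail` layer; because the quantile transform is a fixed differentiable map, gradients still flow through the generator while the tail index stays pinned to the data.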
Related papers
- Identifiable Latent Neural Causal Models [82.14087963690561]
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that do contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations.
arXiv Detail & Related papers (2024-03-23T04:13:55Z)
- Disentanglement of Correlated Factors via Hausdorff Factorized Support [53.23740352226391]
We propose a relaxed disentanglement criterion - the Hausdorff Factorized Support (HFS) criterion - that encourages a factorized support, rather than a factorial distribution.
We show that the use of HFS consistently facilitates disentanglement and recovery of ground-truth factors across a variety of correlation settings and benchmarks.
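As a rough illustration of what encouraging a factorized support could look like on a minibatch, the sketch below compares the batch's joint support with a product-of-marginals proxy built by shuffling each latent dimension independently; the helper names and the shuffling construction are our assumptions, not the HFS authors' code.

```python
import torch

def hausdorff(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric Hausdorff distance between point clouds of shape (n, d) and (m, d)."""
    d = torch.cdist(a, b)                        # pairwise Euclidean distances
    return torch.max(d.min(dim=1).values.max(), d.min(dim=0).values.max())

def hfs_penalty(z: torch.Tensor) -> torch.Tensor:
    """Penalize the gap between the joint support of a latent batch and the
    product of its marginal supports (approximated by column-wise shuffling)."""
    cols = [z[torch.randperm(z.size(0)), j] for j in range(z.size(1))]
    return hausdorff(z, torch.stack(cols, dim=1))
```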
arXiv Detail & Related papers (2022-10-13T20:46:42Z)
- Robust Estimation for Nonparametric Families via Generative Adversarial Networks [92.64483100338724]
We provide a framework for designing Generative Adversarial Networks (GANs) to solve high dimensional robust statistics problems.
Our work extends these to robust mean estimation, second-moment estimation, and robust linear regression.
In terms of techniques, our proposed GAN losses can be viewed as a smoothed and generalized Kolmogorov-Smirnov distance.
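To illustrate the "smoothed and generalized Kolmogorov-Smirnov distance" view, here is a one-dimensional sketch in which the indicator inside the empirical CDF is replaced by a sigmoid so the statistic becomes differentiable; the temperature `tau` and the choice of evaluation points are our assumptions, not the paper's construction.

```python
import torch

def smoothed_ks(x: torch.Tensor, y: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Differentiable 1-D Kolmogorov-Smirnov distance: the empirical CDF's
    step function is smoothed by a sigmoid with temperature tau."""
    t = torch.cat([x, y]).detach()               # evaluation points for both CDFs
    f_x = torch.sigmoid((t[None, :] - x[:, None]) / tau).mean(dim=0)
    f_y = torch.sigmoid((t[None, :] - y[:, None]) / tau).mean(dim=0)
    return (f_x - f_y).abs().max()               # sup-norm gap between smoothed CDFs
```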
arXiv Detail & Related papers (2022-02-02T20:11:33Z)
- Investigating Shifts in GAN Output-Distributions [5.076419064097734]
We introduce a loop-training scheme for the systematic investigation of observable shifts between the distributions of real training data and GAN generated data.
Overall, the combination of these methods allows an exploratory investigation of the innate limitations of current GAN algorithms.
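A schematic of the loop-training idea as we read it: retrain a GAN on its own samples round after round and record how far each generation drifts from the real data. The callables `train_gan` and `distance` are hypothetical placeholders for a user-supplied training routine and divergence estimate.

```python
def loop_training(real_data, train_gan, distance, rounds: int = 5):
    """Track distribution drift across GAN generations: each round trains a
    fresh GAN on the previous round's samples and measures the drift from
    the original real data."""
    data, drifts = real_data, []
    for _ in range(rounds):
        generator = train_gan(data)              # hypothetical training routine
        data = generator(len(real_data))         # next round sees only fakes
        drifts.append(distance(real_data, data))
    return drifts
```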
arXiv Detail & Related papers (2021-12-28T09:16:55Z)
- Decentralized Local Stochastic Extra-Gradient for Variational Inequalities [125.62877849447729]
We consider distributed variational inequalities (VIs) on domains with the problem data that is heterogeneous (non-IID) and distributed across many devices.
We make a very general assumption on the computational network that covers the settings of fully decentralized calculations.
We theoretically analyze its convergence rate in the strongly-monotone, monotone, and non-monotone settings.
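For reference, the classic deterministic extra-gradient step that the paper's decentralized stochastic method builds on looks as follows; the decentralized variant additionally interleaves local steps and cross-device communication, which this sketch omits.

```python
import torch

def extragradient_step(x: torch.Tensor, operator, step: float) -> torch.Tensor:
    """One extra-gradient update for a variational inequality with operator F:
    extrapolate with F(x), then update x using the operator at the midpoint."""
    x_half = x - step * operator(x)              # extrapolation (lookahead) step
    return x - step * operator(x_half)           # update with lookahead gradient
```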
arXiv Detail & Related papers (2021-06-15T17:45:51Z)
- An error analysis of generative adversarial networks for learning distributions [11.842861158282265]
Generative adversarial networks (GANs) learn probability distributions from finite samples.
GANs are able to adaptively learn data distributions with low-dimensional structure or Hölder densities.
Our analysis is based on a new oracle inequality decomposing the estimation error into generator and discriminator approximation error and statistical error.
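Schematically, an oracle inequality of this kind splits the error as in the display below, where \(\hat{\mu}_n\) is the GAN estimate, \(\mu\) the target, \(g_{\#}\rho\) the pushforward of the latent prior, and \(d_{\mathcal{F}}\) the discriminator-induced metric; the paper's precise rates and norms differ, so treat this as notation rather than the authors' statement.

```latex
% Schematic error decomposition behind the oracle inequality:
% total error <= generator approximation + discriminator approximation + statistical error
\[
  d_{\mathcal{F}}(\hat{\mu}_n, \mu)
  \;\lesssim\;
  \underbrace{\inf_{g \in \mathcal{G}} d_{\mathcal{F}}(g_{\#}\rho, \mu)}_{\text{generator approximation}}
  + \underbrace{\mathcal{E}_{\mathrm{approx}}(\mathcal{F})}_{\text{discriminator approximation}}
  + \underbrace{\mathbb{E}\, d_{\mathcal{F}}(\mu, \mu_n)}_{\text{statistical error}}
\]
```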
arXiv Detail & Related papers (2021-05-27T08:55:19Z)
- GANs with Conditional Independence Graphs: On Subadditivity of Probability Divergences [70.30467057209405]
Generative Adversarial Networks (GANs) are modern methods to learn the underlying distribution of a data set.
GANs are designed in a model-free fashion where no additional information about the underlying distribution is available.
We propose a principled design of a model-based GAN that uses a set of simple discriminators on the neighborhoods of the Bayes-net/MRF.
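A minimal sketch of the model-based idea: give each Bayes-net/MRF neighborhood its own small discriminator that only sees those coordinates, and sum the resulting local GAN objectives. The loss form and helper names here are illustrative assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def neighborhood_gan_loss(real, fake, discriminators, neighborhoods):
    """Sum per-neighborhood discriminator objectives: discriminator i sees
    only the coordinates in neighborhoods[i], exploiting subadditivity of
    the divergence over the graph structure."""
    loss = real.new_zeros(())
    for disc, idx in zip(discriminators, neighborhoods):
        loss = loss + F.logsigmoid(disc(real[:, idx])).mean() \
                    + F.logsigmoid(-disc(fake[:, idx])).mean()
    return loss                                  # discriminators ascend this
```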
arXiv Detail & Related papers (2020-03-02T04:31:22Z)
- When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability is still a lingering concern of generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss that achieves better generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss provide significant improvements on various vision tasks.
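For context, the generic margin-based triplet loss that the relation discriminator builds on is sketched below; the paper's relation-network formulation pairs samples differently, so this is background rather than their exact objective.

```python
import torch

def triplet_loss(anchor: torch.Tensor, positive: torch.Tensor,
                 negative: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Margin-based triplet loss: pull anchors toward positives and push
    them at least `margin` farther from negatives."""
    d_pos = (anchor - positive).pow(2).sum(dim=1)    # squared distance to positive
    d_neg = (anchor - negative).pow(2).sum(dim=1)    # squared distance to negative
    return torch.clamp(d_pos - d_neg + margin, min=0).mean()
```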
arXiv Detail & Related papers (2020-02-24T11:35:28Z)