Effective Dynamics of Generative Adversarial Networks
- URL: http://arxiv.org/abs/2212.04580v1
- Date: Thu, 8 Dec 2022 22:04:01 GMT
- Title: Effective Dynamics of Generative Adversarial Networks
- Authors: Steven Durr, Youssef Mroueh, Yuhai Tu, and Shenshen Wang
- Abstract summary: Generative adversarial networks (GANs) are a class of machine-learning models that use adversarial training to generate new samples.
One major form of training failure, known as mode collapse, involves the generator failing to reproduce the full diversity of modes in the target probability distribution.
We present an effective model of GAN training, which captures the learning dynamics by replacing the generator neural network with a collection of particles in the output space.
- Score: 16.51305515824504
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) are a class of machine-learning models
that use adversarial training to generate new samples with the same
(potentially very complex) statistics as the training samples. One major form
of training failure, known as mode collapse, involves the generator failing to
reproduce the full diversity of modes in the target probability distribution.
Here, we present an effective model of GAN training, which captures the
learning dynamics by replacing the generator neural network with a collection
of particles in the output space; particles are coupled by a universal kernel
valid for certain wide neural networks and high-dimensional inputs. The
generality of our simplified model allows us to study the conditions under
which mode collapse occurs. Indeed, experiments which vary the effective kernel
of the generator reveal a mode collapse transition, the shape of which can be
related to the type of discriminator through the frequency principle. Further,
we find that gradient regularizers of intermediate strengths can optimally
yield convergence through critical damping of the generator dynamics. Our
effective GAN model thus provides an interpretable physical framework for
understanding and improving adversarial training.
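The particle picture above is concrete enough to simulate directly. Below is a minimal sketch, not the paper's exact construction: generator particles in a one-dimensional output space are coupled by an assumed Gaussian kernel and ascend the gradient of an RBF-feature logistic discriminator trained alongside them, with a two-mode target distribution.

```python
import numpy as np

# Minimal sketch of kernel-coupled particle dynamics for GAN training.
# Assumptions (not from the paper): Gaussian coupling kernel of width h,
# RBF-feature logistic discriminator, 1-D two-mode Gaussian target.
rng = np.random.default_rng(0)
N, steps, lr_g, lr_d = 256, 3000, 0.1, 0.2
h = 0.5                                    # generator kernel width (assumed)
centers, s = np.linspace(-4, 4, 32), 0.5   # discriminator RBF features (assumed)

real = np.concatenate([rng.normal(-2, 0.2, N // 2), rng.normal(2, 0.2, N // 2)])
x = rng.normal(0, 0.1, N)                  # generator "particles" in output space
w = np.zeros(len(centers))                 # discriminator weights, D(y) = w @ phi(y)

def phi(y):                                # RBF feature map, shape (len(y), n_features)
    return np.exp(-0.5 * ((y[:, None] - centers[None, :]) / s) ** 2)

def dphi(y):                               # derivative of each feature w.r.t. y
    return phi(y) * (centers[None, :] - y[:, None]) / s ** 2

for t in range(steps):
    # Discriminator: one ascent step on the logistic GAN objective.
    p_real = 1 / (1 + np.exp(-(phi(real) @ w)))
    p_fake = 1 / (1 + np.exp(-(phi(x) @ w)))
    w += lr_d * ((1 - p_real) @ phi(real) - p_fake @ phi(x)) / N

    # Generator: particles ascend D, sharing updates through the kernel K.
    grad_D = dphi(x) @ w                   # D'(x_j) at each particle
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    x += lr_g * (K @ grad_D) / N

print("particle mean/std:", x.mean(), x.std())  # std near 2 => both modes reached
```

Varying the kernel width h is this sketch's analogue of changing the generator's effective kernel, the knob the paper uses to probe the mode-collapse transition.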
Related papers
- Neural Residual Diffusion Models for Deep Scalable Vision Generation [17.931568104324985]
We propose a unified and massively scalable Neural Residual Diffusion Models framework (Neural-RDM).
The proposed neural residual models obtain state-of-the-art scores on image and video generative benchmarks.
arXiv Detail & Related papers (2024-06-19T04:57:18Z)
- Improving Out-of-Distribution Robustness of Classifiers via Generative Interpolation [56.620403243640396]
Deep neural networks achieve superior performance for learning from independent and identically distributed (i.i.d.) data.
However, their performance deteriorates significantly when handling out-of-distribution (OoD) data.
We develop a simple yet effective method called Generative Interpolation to fuse generative models trained from multiple domains for synthesizing diverse OoD samples.
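The abstract does not spell out the fusing mechanism; one plausible reading, assumed here rather than taken from the paper, is to sample from a generator whose weights are a random convex combination of the per-domain generators:

```python
import torch

def fused_sample(generators, z):
    """Sample with weights interpolated across domain generators (assumed mechanism)."""
    alphas = torch.distributions.Dirichlet(torch.ones(len(generators))).sample()
    state = {k: sum(a * g.state_dict()[k].float() for a, g in zip(alphas, generators))
             for k in generators[0].state_dict()}
    g = type(generators[0])()     # assumes a no-argument constructor
    g.load_state_dict(state)      # assumes architecturally identical generators
    return g(z)
```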
arXiv Detail & Related papers (2023-07-23T03:53:53Z)
- Unifying GANs and Score-Based Diffusion as Generative Particle Models [18.00326775812974]
We propose a novel framework that unifies particle and adversarial generative models.
This suggests that a generator is an optional addition to any such generative model.
We empirically test the viability of these original models as proofs of concepts of potential applications of our framework.
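A schematic of that unification, under one reading of the abstract (an illustration, not the paper's formal construction): both families move a population of particles along a velocity field, and only the field differs.

```python
import numpy as np

def gan_velocity(x, grad_D):
    """GAN reading: particles ascend a learned discriminator."""
    return grad_D(x)

def score_velocity(x, score_real, score_fake):
    """Score/diffusion reading: particles follow the log-density-ratio gradient."""
    return score_real(x) - score_fake(x)

def particle_step(x, velocity, lr=0.1):
    return x + lr * velocity(x)
```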
arXiv Detail & Related papers (2023-05-25T15:20:10Z)
- Accurate generation of stochastic dynamics based on multi-model Generative Adversarial Networks [0.0]
Generative Adversarial Networks (GANs) have shown immense potential in fields such as text and image generation.
Here we quantitatively test GANs as generators of stochastic dynamics by applying them to a prototypical process on a lattice.
Importantly, the discreteness of the model is retained despite the noise.
arXiv Detail & Related papers (2023-05-25T10:41:02Z)
- DIFFormer: Scalable (Graph) Transformers Induced by Energy Constrained Diffusion [66.21290235237808]
We introduce an energy constrained diffusion model which encodes a batch of instances from a dataset into evolutionary states.
We provide rigorous theory that implies closed-form optimal estimates for the pairwise diffusion strength among arbitrary instance pairs.
Experiments highlight the wide applicability of our model as a general-purpose encoder backbone with superior performance in various tasks.
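As a rough illustration of the instance-level diffusion (a softmax similarity coupling stands in for the paper's closed-form optimal diffusion strength):

```python
import numpy as np

def diffusion_step(Z, step=0.5):
    """One propagation step for a batch of instance states Z of shape (n, d)."""
    sim = Z @ Z.T / np.sqrt(Z.shape[1])            # pairwise similarities
    A = np.exp(sim - sim.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)              # row-normalized coupling
    return Z + step * (A @ Z - Z)                  # drift toward coupled mean
```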
arXiv Detail & Related papers (2023-01-23T15:18:54Z)
- Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited Data [125.7135706352493]
Generative adversarial networks (GANs) typically require ample data for training in order to synthesize high-fidelity images.
Recent studies have shown that training GANs with limited data remains formidable due to discriminator overfitting.
This paper introduces a novel strategy called Adaptive Pseudo Augmentation (APA) to encourage healthy competition between the generator and the discriminator.
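A hedged sketch of the adaptive mechanism, assuming an ADA-style overfitting heuristic (the paper's exact schedule may differ): with probability p, the discriminator's real batch is replaced per-sample by detached fakes, and p rises when the discriminator looks overconfident on real data.

```python
import torch

p, target, step = 0.0, 0.6, 0.01    # illustrative hyperparameters (assumed)

def apa_real_batch(real, fake, p):
    """With probability p per sample, show D a fake in place of a real image."""
    mask = (torch.rand(real.size(0), 1, 1, 1, device=real.device) < p).float()
    return mask * fake.detach() + (1 - mask) * real   # assumes NCHW image batches

def update_p(d_logits_real, p):
    """Raise p when D's real logits are mostly positive (a sign of overfitting)."""
    overfit = d_logits_real.sign().mean().item()
    return min(max(p + (step if overfit > target else -step), 0.0), 1.0)
```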
arXiv Detail & Related papers (2021-11-12T18:13:45Z)
- Generative Adversarial Network for Probabilistic Forecast of Random Dynamical System [19.742888499307178]
We present a deep learning model for data-driven simulations of random dynamical systems without a distributional assumption.
We propose a regularization strategy for a generative adversarial network based on consistency conditions for the sequential inference problems.
The behavior of the proposed model is studied by using three processes with complex noise structures.
arXiv Detail & Related papers (2021-11-04T19:50:56Z)
- Controllable and Compositional Generation with Latent-Space Energy-Based Models [60.87740144816278]
Controllable generation is one of the key requirements for successful adoption of deep generative models in real-world applications.
In this work, we use energy-based models (EBMs) to handle compositional generation over a set of attributes.
By composing energy functions with logical operators, this work is the first to achieve such compositionality in generating photo-realistic images of resolution 1024x1024.
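With p(x) proportional to exp(-E(x)), the logical operators compose energies in a standard way; a short sketch (the negation convention below is one common choice, assumed here rather than taken from the paper):

```python
import numpy as np

def e_and(e1, e2):                 # conjunction: product of distributions
    return lambda x: e1(x) + e2(x)

def e_or(e1, e2):                  # disjunction: mixture of distributions
    return lambda x: -np.logaddexp(-e1(x), -e2(x))

def e_not(e, alpha=1.0):           # negation: flip the energy landscape
    return lambda x: -alpha * e(x)

# e.g. "smiling AND NOT glasses": e_and(e_smile, e_not(e_glasses));
# samples are then drawn under the composed energy (e.g. via Langevin
# dynamics in the latent space).
```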
arXiv Detail & Related papers (2021-10-21T03:31:45Z)
- GANs with Variational Entropy Regularizers: Applications in Mitigating the Mode-Collapse Issue [95.23775347605923]
Building on the success of deep learning, Generative Adversarial Networks (GANs) provide a modern approach to learn a probability distribution from observed samples.
GANs often suffer from the mode collapse issue where the generator fails to capture all existing modes of the input distribution.
We take an information-theoretic approach and maximize a variational lower bound on the entropy of the generated samples to increase their diversity.
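One standard route to such a bound, used here as an illustration (the paper's exact variational construction may differ): for a deterministic generator, H(G(z)) = I(z; G(z)) >= H(z) + E[log q(z | G(z))], so training an auxiliary encoder q to reconstruct z and adding its log-likelihood to the generator objective maximizes an entropy lower bound.

```python
import torch
import torch.nn as nn

# Hypothetical shapes: 784-dim flattened samples, 64-dim latent codes.
z_dim = 64
q_net = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, z_dim))

def entropy_bound_term(x_fake, z):
    """E[log q(z|x)] up to constants, for a unit-variance Gaussian q."""
    z_hat = q_net(x_fake.flatten(1))
    return -((z_hat - z) ** 2).sum(dim=1).mean()

# Schematic generator objective: adversarial loss minus lam * entropy term,
# so that maximizing the bound pushes generated samples toward diversity.
# g_loss = adv_loss - lam * entropy_bound_term(x_fake, z)
```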
arXiv Detail & Related papers (2020-09-24T19:34:37Z)
- DCTRGAN: Improving the Precision of Generative Models with Reweighting [1.2622634782102324]
We introduce a post-hoc correction to deep generative models to further improve their fidelity.
The correction takes the form of a reweighting function that can be applied to generated examples.
We show that the weighted GAN examples significantly improve the accuracy of the generated samples without a large loss in statistical power.
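The reweighting is the classifier-based likelihood-ratio trick; here is a minimal sketch with a logistic-regression stand-in for the paper's neural-network classifier:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def reweight(generated, real):
    """Weight each generated sample by p(real|x) / p(generated|x)."""
    X = np.vstack([generated, real])
    y = np.concatenate([np.zeros(len(generated)), np.ones(len(real))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    c = clf.predict_proba(generated)[:, 1].clip(1e-6, 1 - 1e-6)
    return c / (1 - c)   # well-calibrated classifier => likelihood ratio

# Weighted histograms of the generated samples then track the real
# distribution more closely than the raw samples do.
```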
arXiv Detail & Related papers (2020-09-03T18:00:27Z)
- Unsupervised Controllable Generation with Self-Training [90.04287577605723]
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
arXiv Detail & Related papers (2020-07-17T21:50:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.