Fostering Diversity in Spatial Evolutionary Generative Adversarial
Networks
- URL: http://arxiv.org/abs/2106.13590v1
- Date: Fri, 25 Jun 2021 12:40:36 GMT
- Title: Fostering Diversity in Spatial Evolutionary Generative Adversarial
Networks
- Authors: Jamal Toutouh and Erik Hemberg and Una-May O'Reilly
- Abstract summary: This article introduces Mustangs, a spatially distributed CoE-GAN, which fosters diversity by using different loss functions during training.
Experimental analysis on MNIST and CelebA demonstrated that Mustangs trains statistically more accurate generators.
- Score: 10.603020431394157
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) suffer from training
pathologies such as instability and mode collapse, which mainly arise from a
lack of diversity in their adversarial interactions. Co-evolutionary GAN
(CoE-GAN) training algorithms have been shown to be resilient to these
pathologies. This article introduces Mustangs, a spatially distributed CoE-GAN,
which fosters diversity by using different loss functions during training.
Experimental analysis on MNIST and CelebA demonstrated that Mustangs trains
statistically more accurate generators.
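The loss-diversity mechanism can be sketched with the three generator losses popularised by E-GAN (minimax, heuristic, least-squares), assigned per cell of a spatial grid. A minimal NumPy sketch; the grid size, random assignment policy, and function names are illustrative assumptions, not Mustangs' actual implementation:

```python
import numpy as np

# Three generator losses from the E-GAN family; d_fake holds the
# discriminator's probabilities that fake samples are real.
def minimax(d_fake):        return np.mean(np.log(1.0 - d_fake))
def heuristic(d_fake):      return -np.mean(np.log(d_fake))
def least_squares(d_fake):  return np.mean((d_fake - 1.0) ** 2)

LOSSES = [minimax, heuristic, least_squares]

def assign_losses(grid_h, grid_w, rng):
    """Pick one loss per grid cell, so neighbouring cells can train
    their generators with different objectives."""
    return [[rng.choice(len(LOSSES)) for _ in range(grid_w)]
            for _ in range(grid_h)]

rng = np.random.default_rng(0)
grid = assign_losses(2, 2, rng)
d_fake = np.clip(rng.random(64), 1e-6, 1 - 1e-6)  # toy D outputs
cell_losses = [[LOSSES[grid[i][j]](d_fake) for j in range(2)]
               for i in range(2)]
```

Each cell would then update its generator with its own objective, trading gradients only with grid neighbours.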
Related papers
- Latent State Models of Training Dynamics [51.88132043461152]
We train models with different random seeds and compute a variety of metrics throughout training.
We then fit a hidden Markov model (HMM) over the resulting sequences of metrics.
We use the HMM representation to study phase transitions and identify latent "detour" states that slow down convergence.
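To illustrate the latent-state idea, here is a toy sketch that decodes hidden training phases with the Viterbi algorithm, assuming fixed, illustrative HMM parameters over a discretised loss metric (the paper instead fits the HMM to real metric sequences):

```python
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """Most likely hidden-state path for a discrete-observation HMM.
    obs: observation indices; log_pi: initial log-probs (K,);
    log_A: transition log-probs (K, K); log_B: emission log-probs (K, M)."""
    K, T = log_A.shape[0], len(obs)
    delta = log_pi + log_B[:, obs[0]]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A              # (from, to)
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(K)] + log_B[:, obs[t]]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# States: 0 = "converging", 1 = "detour". Observations: loss trend
# bucketed as 0 = decreasing, 1 = flat/increasing (made-up parameters).
log_pi = np.log([0.9, 0.1])
log_A  = np.log([[0.9, 0.1], [0.3, 0.7]])
log_B  = np.log([[0.8, 0.2], [0.2, 0.8]])
obs = [0, 0, 1, 1, 1, 0, 0]
states = viterbi(obs, log_pi, log_A, log_B)
```

The mid-run stretch of flat loss decodes as a "detour" state, mirroring how the paper flags states that slow convergence.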
arXiv Detail & Related papers (2023-08-18T13:20:08Z) - Effective Dynamics of Generative Adversarial Networks [16.51305515824504]
Generative adversarial networks (GANs) are a class of machine-learning models that use adversarial training to generate new samples.
One major form of training failure, known as mode collapse, involves the generator failing to reproduce the full diversity of modes in the target probability distribution.
We present an effective model of GAN training, which captures the learning dynamics by replacing the generator neural network with a collection of particles in the output space.
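The particle picture can be sketched by letting output-space particles ascend the log-density of a two-mode Gaussian mixture standing in for the target distribution; the modes, step size, and mixture form are illustrative assumptions, not the paper's effective model:

```python
import numpy as np

def mixture_grad(x, modes, sigma=0.5):
    """Gradient of the log-density of an equal-weight, isotropic
    Gaussian mixture, evaluated at each particle in x (N, D)."""
    diffs = modes[None, :, :] - x[:, None, :]              # (N, K, D)
    w = np.exp(-np.sum(diffs ** 2, axis=2) / (2 * sigma ** 2))
    w = w / (w.sum(axis=1, keepdims=True) + 1e-12)         # responsibilities
    return np.sum(w[:, :, None] * diffs, axis=1) / sigma ** 2

rng = np.random.default_rng(2)
modes = np.array([[-2.0, 0.0], [2.0, 0.0]])
particles = rng.normal(scale=0.1, size=(50, 2))            # start near origin
for _ in range(200):
    particles += 0.05 * mixture_grad(particles, modes)     # gradient ascent
```

Each particle drifts to its nearest mode; if the initial cloud sat entirely in one basin, the other mode would go uncovered, which is the mode-collapse failure in miniature.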
arXiv Detail & Related papers (2022-12-08T22:04:01Z) - Safety-compliant Generative Adversarial Networks for Human Trajectory
Forecasting [95.82600221180415]
Human trajectory forecasting in crowds presents the challenges of modelling social interactions and producing collision-free multimodal distributions.
We introduce SGANv2, an improved safety-compliant SGAN architecture equipped with motion-temporal interaction modelling and a transformer-based discriminator design.
arXiv Detail & Related papers (2022-09-25T15:18:56Z) - Enhancing Adversarial Training with Feature Separability [52.39305978984573]
We introduce a new concept, the adversarial training graph (ATG), with which the proposed adversarial training with feature separability (ATFS) boosts intra-class feature similarity and increases inter-class feature variance.
Through comprehensive experiments, we demonstrate that the proposed ATFS framework significantly improves both clean and robust performance.
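The two quantities the framework targets can be illustrated with simple feature statistics; the metric definitions below are hypothetical stand-ins, not the paper's ATG-based formulation:

```python
import numpy as np

def intra_class_similarity(feats, labels):
    """Mean cosine similarity between each feature and its class centroid
    (higher = tighter classes)."""
    sims = []
    for c in np.unique(labels):
        F = feats[labels == c]
        centroid = F.mean(axis=0)
        sims.append(np.mean(
            F @ centroid /
            (np.linalg.norm(F, axis=1) * np.linalg.norm(centroid) + 1e-12)))
    return float(np.mean(sims))

def inter_class_variance(feats, labels):
    """Mean squared distance of class centroids from the global mean
    (higher = better-separated classes)."""
    centroids = np.stack([feats[labels == c].mean(axis=0)
                          for c in np.unique(labels)])
    return float(np.mean(np.sum((centroids - feats.mean(axis=0)) ** 2,
                                axis=1)))
```

A separability-oriented objective would push the first quantity up while keeping the second large.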
arXiv Detail & Related papers (2022-05-02T04:04:23Z) - IE-GAN: An Improved Evolutionary Generative Adversarial Network Using a
New Fitness Function and a Generic Crossover Operator [20.100388977505002]
We propose an improved E-GAN framework called IE-GAN, which introduces a new fitness function and a generic crossover operator.
In particular, the proposed fitness function can model the evolutionary process of individuals more accurately.
The crossover operator, which has been commonly adopted in evolutionary algorithms, can enable offspring to imitate the superior gene expression of their parents.
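A generic parameter-level crossover of the kind described can be sketched as uniform selection of tensors from two parent networks; the dictionary layout and selection probability are illustrative assumptions, not IE-GAN's operator:

```python
import numpy as np

def uniform_crossover(parent_a, parent_b, rng, p=0.5):
    """Offspring takes each named parameter tensor from parent A with
    probability p, otherwise from parent B, so it can inherit the
    stronger 'gene expression' of either parent."""
    child = {}
    for name in parent_a:
        take_a = rng.random() < p
        child[name] = parent_a[name].copy() if take_a else parent_b[name].copy()
    return child

rng = np.random.default_rng(1)
pa = {"w1": np.zeros((2, 2)), "b1": np.zeros(2)}   # toy parent networks
pb = {"w1": np.ones((2, 2)),  "b1": np.ones(2)}
child = uniform_crossover(pa, pb, rng)
```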
arXiv Detail & Related papers (2021-07-25T13:55:07Z) - Training Generative Adversarial Networks in One Stage [58.983325666852856]
We introduce a general training scheme that enables training GANs efficiently in only one stage.
We show that the proposed method is readily applicable to other adversarial-training scenarios, such as data-free knowledge distillation.
arXiv Detail & Related papers (2021-02-28T09:03:39Z) - Evolutionary Generative Adversarial Networks with Crossover Based
Knowledge Distillation [4.044110325063562]
We propose a general crossover operator, which can be widely applied to GANs using evolutionary strategies.
We then design an evolutionary GAN framework C-GAN based on it.
We then combine the crossover operator with evolutionary generative adversarial networks (EGAN) to implement evolutionary generative adversarial networks with crossover (CE-GAN).
arXiv Detail & Related papers (2021-01-27T03:24:30Z) - Analyzing the Components of Distributed Coevolutionary GAN Training [8.198369743955528]
We investigate the impact on the performance of two algorithm components that influence the diversity during coevolution.
In experiments on the MNIST dataset, we find that the combination of these two components provides the best generative models.
arXiv Detail & Related papers (2020-08-03T18:35:06Z) - Improving GAN Training with Probability Ratio Clipping and Sample
Reweighting [145.5106274085799]
Generative adversarial networks (GANs) often suffer from inferior performance due to unstable training.
We propose a new variational GAN training framework which enjoys superior training stability.
By plugging the training approach in diverse state-of-the-art GAN architectures, we obtain significantly improved performance over a range of tasks.
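Probability ratio clipping follows the same intuition as the PPO clipped surrogate: bound how far one update can move the sampling distribution. A sketch of that surrogate under stated assumptions, not the paper's exact variational objective:

```python
import numpy as np

def clipped_ratio_objective(logp_new, logp_old, advantage, eps=0.2):
    """Clipped surrogate objective: the probability ratio between the
    updated and old distributions is clipped to [1 - eps, 1 + eps],
    and the pessimistic (minimum) term is kept."""
    ratio = np.exp(logp_new - logp_old)
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.mean(np.minimum(unclipped, clipped))
```

With ratio 2 and a positive advantage the objective contributes only 1.2, so a single step cannot exploit an overconfident density-ratio estimate.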
arXiv Detail & Related papers (2020-06-12T01:39:48Z) - Host-Pathogen Co-evolution Inspired Algorithm Enables Robust GAN
Training [0.0]
Generative adversarial networks (GANs) are pairs of artificial neural networks that are trained against each other.
GANs have allowed for the generation of impressive imitations of real-life films, images and texts, whose fakeness is barely noticeable to humans.
We propose a more robust algorithm for GAN training and empirically show increased stability and a better ability to generate high-quality images while using less computational power.
arXiv Detail & Related papers (2020-05-22T09:54:06Z) - When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability is still a lingering concern with generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss that yields better generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss provide significant improvement on various vision tasks.
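The standard triplet loss the paper builds on can be written down directly; this sketch shows the loss alone, without the relation-network discriminator it is paired with:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss on batches of embeddings (N, D): pull the
    anchor toward the positive and push it from the negative until the
    squared-distance gap reaches `margin`."""
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    return float(np.mean(np.maximum(d_pos - d_neg + margin, 0.0)))
```

Once the negative is more than `margin` farther than the positive, the hinge zeroes out and that triplet stops contributing gradient.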
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.