Evolutionary Generative Adversarial Networks with Crossover Based
Knowledge Distillation
- URL: http://arxiv.org/abs/2101.11186v1
- Date: Wed, 27 Jan 2021 03:24:30 GMT
- Title: Evolutionary Generative Adversarial Networks with Crossover Based
Knowledge Distillation
- Authors: Junjie Li, Junwei Zhang, Xiaoyu Gong, Shuai Lü
- Abstract summary: We propose a general crossover operator, which can be widely applied to GANs using evolutionary strategies.
We then design an evolutionary GAN framework, C-GAN, based on it.
We further combine the crossover operator with evolutionary generative adversarial networks (EGAN) to implement evolutionary generative adversarial networks with crossover (CE-GAN).
- Score: 4.044110325063562
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative Adversarial Networks (GANs) are adversarial models that have
been demonstrated to be effective for various generative tasks. However, GANs
and their variants also suffer from many training problems, such as mode collapse
and vanishing gradients. In this paper, we first propose a general crossover
operator, which can be widely applied to GANs using evolutionary strategies.
We then design an evolutionary GAN framework, C-GAN, based on it, and combine
the crossover operator with evolutionary generative adversarial networks (EGAN)
to implement evolutionary generative adversarial networks with crossover
(CE-GAN). Using a variety of loss functions as mutation operators to generate
mutated individuals, we evaluate the generated samples and let the mutated
individuals learn from the best output in a knowledge distillation manner,
imitating it to produce better offspring. We then greedily select the best
offspring as parents for subsequent training, using the discriminator as the
evaluator. Experiments on real datasets demonstrate the effectiveness of CE-GAN
and show that our method is competitive in terms of generated image quality and
time efficiency.
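The loop described in the abstract (loss-function mutations, knowledge-distillation crossover toward the best-scoring individual, and greedy selection by the discriminator) can be illustrated with a minimal PyTorch-style sketch. Everything below is an illustrative assumption rather than the authors' implementation: the MSE imitation loss, the use of the discriminator's mean score as fitness, and all names and hyperparameters are placeholders.

import copy
import torch
import torch.nn.functional as F

def ce_gan_generation(parent_G, D, mutation_losses, latent_dim,
                      batch_size=64, lr=2e-4, distill_weight=1.0, device="cpu"):
    """One hypothetical CE-GAN-style generation step (sketch, not the paper's code).

    mutation_losses: callables mapping D(fake) outputs to a generator loss,
    e.g. minimax, heuristic (non-saturating), and least-squares objectives.
    """
    z = torch.randn(batch_size, latent_dim, device=device)

    # Mutation: each child is a copy of the parent updated with a different loss.
    children = []
    for loss_fn in mutation_losses:
        child = copy.deepcopy(parent_G)
        opt = torch.optim.Adam(child.parameters(), lr=lr)
        loss = loss_fn(D(child(z)))
        opt.zero_grad()
        loss.backward()
        opt.step()
        children.append(child)

    # Evaluate children; the discriminator's mean score stands in for fitness here.
    with torch.no_grad():
        scores = [D(c(z)).mean().item() for c in children]
    best = children[max(range(len(children)), key=scores.__getitem__)]

    # Crossover as knowledge distillation: the other children imitate the best
    # child's outputs on the same latent codes.
    with torch.no_grad():
        teacher_out = best(z)
    for child in children:
        if child is best:
            continue
        opt = torch.optim.Adam(child.parameters(), lr=lr)
        distill = distill_weight * F.mse_loss(child(z), teacher_out)
        opt.zero_grad()
        distill.backward()
        opt.step()

    # Greedy selection: keep the offspring the discriminator now rates highest.
    with torch.no_grad():
        final = [D(c(z)).mean().item() for c in children]
    return children[max(range(len(final)), key=final.__getitem__)]

In the paper the fitness evaluation and distillation targets are more elaborate; this sketch only mirrors the control flow of mutation, crossover, and selection.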
Related papers
- GE-AdvGAN: Improving the transferability of adversarial samples by gradient editing-based adversarial generative model [69.71629949747884]
Adversarial generative models, such as Generative Adversarial Networks (GANs), are widely applied for generating various types of data.
In this work, we propose a novel algorithm named GE-AdvGAN to enhance the transferability of adversarial samples.
arXiv Detail & Related papers (2024-01-11T16:43:16Z)
- Mind the Gap in Distilling StyleGANs [100.58444291751015]
The StyleGAN family is one of the most popular Generative Adversarial Network (GAN) architectures for unconditional generation.
This paper provides a comprehensive study of distilling from the popular StyleGAN-like architecture.
arXiv Detail & Related papers (2022-08-18T14:18:29Z)
- Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited Data [125.7135706352493]
Generative adversarial networks (GANs) typically require ample data for training in order to synthesize high-fidelity images.
Recent studies have shown that training GANs with limited data remains formidable due to discriminator overfitting.
This paper introduces a novel strategy called Adaptive Pseudo Augmentation (APA) to encourage healthy competition between the generator and the discriminator.
arXiv Detail & Related papers (2021-11-12T18:13:45Z)
- The Nuts and Bolts of Adopting Transformer in GANs [124.30856952272913]
We investigate the properties of Transformer in the generative adversarial network (GAN) framework for high-fidelity image synthesis.
Our study leads to a new alternative design of Transformers in GANs: a convolutional neural network (CNN)-free generator termed STrans-G.
arXiv Detail & Related papers (2021-10-25T17:01:29Z)
- IE-GAN: An Improved Evolutionary Generative Adversarial Network Using a New Fitness Function and a Generic Crossover Operator [20.100388977505002]
We propose an improved E-GAN framework called IE-GAN, which introduces a new fitness function and a generic crossover operator.
In particular, the proposed fitness function can model the evolutionary process of individuals more accurately.
The crossover operator, which has been commonly adopted in evolutionary algorithms, can enable offspring to imitate the superior gene expression of their parents.
arXiv Detail & Related papers (2021-07-25T13:55:07Z)
- Fostering Diversity in Spatial Evolutionary Generative Adversarial Networks [10.603020431394157]
This article introduces Mustangs, a spatially distributed CoE-GAN, which fosters diversity by using different loss functions during training.
Experimental analysis on MNIST and CelebA demonstrates that Mustangs trains statistically more accurate generators.
arXiv Detail & Related papers (2021-06-25T12:40:36Z)
- Epigenetic evolution of deep convolutional models [81.21462458089142]
We build upon a previously proposed neuroevolution framework to evolve deep convolutional models.
We propose a convolutional layer layout which allows kernels of different shapes and sizes to coexist within the same layer.
The proposed layout enables the size and shape of individual kernels within a convolutional layer to be evolved with a corresponding new mutation operator.
arXiv Detail & Related papers (2021-04-12T12:45:16Z)
- Training Generative Adversarial Networks in One Stage [58.983325666852856]
We introduce a general training scheme that enables training GANs efficiently in only one stage.
We show that the proposed method is readily applicable to other adversarial-training scenarios, such as data-free knowledge distillation.
arXiv Detail & Related papers (2021-02-28T09:03:39Z)
- Demonstrating the Evolution of GANs through t-SNE [0.4588028371034407]
Evolutionary algorithms, such as COEGAN, were recently proposed as a solution to improve the GAN training.
In this work, we propose an evaluation method based on t-distributed Stochastic Neighbor Embedding (t-SNE) to assess the progress of GANs.
A metric based on the resulting t-SNE maps and the Jaccard index is proposed to represent model quality (a toy version of such an overlap score is sketched after this list).
arXiv Detail & Related papers (2021-01-31T20:07:08Z)
- Autoencoding Generative Adversarial Networks [0.0]
I propose a four-network model which learns a mapping between a specified latent space and a given sample space.
The AEGAN technique offers several improvements to typical GAN training, including training stabilization, mode-collapse prevention, and permitting direct interpolation between real samples.
arXiv Detail & Related papers (2020-04-11T19:51:04Z)
- Using Skill Rating as Fitness on the Evolution of GANs [0.4588028371034407]
Generative Adversarial Networks (GANs) are adversarial models that achieved impressive results on generative tasks.
GANs present some challenges regarding stability, often making training a hit-and-miss process.
Recent works proposed the use of evolutionary algorithms on GAN training, aiming to solve these challenges and to provide an automatic way to find good models.
arXiv Detail & Related papers (2020-04-09T20:26:51Z)
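As a companion to the t-SNE evaluation entry above, here is a minimal sketch of how a Jaccard-style overlap score between real and generated samples could be computed from a shared t-SNE embedding. The grid discretization, the scikit-learn TSNE call, and all parameter choices are illustrative assumptions, not the metric defined in that paper.

import numpy as np
from sklearn.manifold import TSNE

def tsne_jaccard_overlap(real, fake, grid_size=20, random_state=0):
    """Embed real and generated samples together with t-SNE, discretize the
    2-D map into a grid, and return the Jaccard index of the occupied cells.

    real, fake: arrays of shape (n_samples, n_features), e.g. flattened images.
    """
    data = np.concatenate([real, fake], axis=0)
    emb = TSNE(n_components=2, random_state=random_state).fit_transform(data)

    # Normalize the joint embedding to [0, 1] and assign each point to a cell.
    span = emb.max(axis=0) - emb.min(axis=0) + 1e-12
    norm = (emb - emb.min(axis=0)) / span
    cells = np.minimum((norm * grid_size).astype(int), grid_size - 1)

    real_cells = {tuple(c) for c in cells[: len(real)]}
    fake_cells = {tuple(c) for c in cells[len(real):]}

    # Jaccard index: shared occupied cells over all occupied cells.
    union = real_cells | fake_cells
    return len(real_cells & fake_cells) / len(union) if union else 0.0

With a few hundred flattened samples per set, a higher overlap loosely indicates that the generated distribution covers more of the regions occupied by the real data in the embedded space.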
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.