Host-Pathogen Co-evolution Inspired Algorithm Enables Robust GAN
Training
- URL: http://arxiv.org/abs/2006.04720v2
- Date: Tue, 9 Jun 2020 11:21:03 GMT
- Title: Host-Pathogen Co-evolution Inspired Algorithm Enables Robust GAN
Training
- Authors: Andrei Kucharavy (1), El Mahdi El Mhamdi (1) and Rachid Guerraoui (1)
((1) Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland)
- Abstract summary: Generative adversarial networks (GANs) are pairs of artificial neural networks that are trained against each other.
GANs have allowed for the generation of impressive imitations of real-life films, images and texts, whose fakeness is barely noticeable to humans.
We propose a more robust algorithm for GAN training. We empirically show increased stability and a better ability to generate high-quality images while using less computational power.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) are pairs of artificial neural
networks that are trained against each other. The outputs from a generator
are mixed with the real-world inputs to the discriminator, and both networks are
trained until an equilibrium is reached, where the discriminator cannot
distinguish generated inputs from real ones. Since their introduction, GANs
have allowed for the generation of impressive imitations of real-life films,
images and texts, whose fakeness is barely noticeable to humans. Despite this
impressive performance, training GANs remains to this day more of an art than a
reliable procedure, in large part due to the instability of the training process.
Generators are susceptible to mode dropping and convergence to random patterns,
which have to be mitigated by computationally expensive multiple restarts.
Curiously, GANs bear an uncanny similarity to the co-evolution of a pathogen and
its host's immune system in biology. In a biological context, the majority of
potential pathogens indeed never take hold and are kept at bay by the host's
immune system. Yet some are effective enough to present a risk of serious
illness and recurrent infections. Here, we exploit that similarity to propose
a more robust algorithm for GAN training. We empirically show increased
stability and a better ability to generate high-quality images while using less
computational power.
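For reference, the training procedure the abstract describes can be sketched as the standard adversarial loop below. This is a minimal PyTorch illustration of ordinary GAN training, not the host-pathogen inspired algorithm the paper proposes; the tiny architectures and the Gaussian stand-in data are assumptions.

```python
# Minimal GAN training loop (PyTorch). Architectures, data, and
# hyperparameters are illustrative placeholders, not the paper's setup.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=128):  # stand-in for real data: a shifted 2-D Gaussian
    return torch.randn(n, data_dim) * 0.5 + 2.0

for step in range(1000):
    # Discriminator step: real inputs are mixed with generated ones.
    real = real_batch()
    fake = G(torch.randn(len(real), latent_dim)).detach()
    d_loss = bce(D(real), torch.ones(len(real), 1)) + \
             bce(D(fake), torch.zeros(len(fake), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to fool the discriminator.
    fake = G(torch.randn(128, latent_dim))
    g_loss = bce(D(fake), torch.ones(len(fake), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

At the equilibrium the abstract refers to, the discriminator's output approaches 0.5 on both real and generated batches.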
Related papers
- StealthDiffusion: Towards Evading Diffusion Forensic Detection through Diffusion Model [62.25424831998405]
StealthDiffusion is a framework that modifies AI-generated images into high-quality, imperceptible adversarial examples.
It is effective in both white-box and black-box settings, turning AI-generated images into adversarial forgeries that evade forensic detection.
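The mechanism summarized above can be illustrated in a generic white-box form: projected gradient steps that lower a forensic detector's "AI-generated" score under a small perturbation budget. This is a hedged sketch, not the StealthDiffusion method, and `detector` is a hypothetical stand-in classifier.

```python
# Generic white-box adversarial-perturbation sketch (PGD-style), NOT the
# StealthDiffusion method; `detector` is a hypothetical binary classifier
# scoring P(image is AI-generated).
import torch

def evade_detector(image, detector, eps=4/255, steps=20, alpha=1/255):
    """Nudge `image` (N,C,H,W in [0,1]) to lower the detector's 'fake' score."""
    adv = image.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        score = detector(adv).mean()  # high score = flagged as AI-generated
        score.backward()
        with torch.no_grad():
            adv = adv - alpha * adv.grad.sign()           # descend on the score
            adv = image + (adv - image).clamp(-eps, eps)  # stay imperceptible
            adv = adv.clamp(0, 1)
    return adv.detach()
```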
arXiv Detail & Related papers (2024-08-11T01:22:29Z)
- Breaking Free: How to Hack Safety Guardrails in Black-Box Diffusion Models! [52.0855711767075]
EvoSeed is an evolutionary strategy-based algorithmic framework for generating photo-realistic natural adversarial samples.
We employ CMA-ES to optimize the search for an initial seed vector which, when processed by the Conditional Diffusion Model, results in a natural adversarial sample that is misclassified by the victim model.
Experiments show that generated adversarial images are of high image quality, raising concerns about generating harmful content bypassing safety classifiers.
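A hedged sketch of that search loop, using the `cma` package's ask/tell interface, is below; the `generate` and `classifier_confidence` stubs stand in for the Conditional Diffusion Model and the victim classifier, and the fitness definition is an assumption.

```python
# Sketch of CMA-ES search over an initial seed vector, in the spirit of the
# entry above; `generate` and `classifier_confidence` are hypothetical stubs.
import numpy as np
import cma  # pip install cma

def generate(seed_vec):
    """Stand-in for a conditional diffusion model mapping a seed to an image."""
    return np.tanh(seed_vec)  # placeholder "image"

def classifier_confidence(image):
    """Stand-in for the victim model's confidence in the true label."""
    return float(np.clip(image.mean() + 0.5, 0.0, 1.0))

# Fitness to MINIMIZE: confidence in the true label, so low values mean the
# generated sample is misclassified while still coming from the model.
es = cma.CMAEvolutionStrategy(np.zeros(64), 0.5, {'maxiter': 50, 'verbose': -9})
while not es.stop():
    seeds = es.ask()
    es.tell(seeds, [classifier_confidence(generate(s)) for s in seeds])
adversarial_seed = es.result.xbest
```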
arXiv Detail & Related papers (2024-02-07T09:39:29Z)
- Effective Dynamics of Generative Adversarial Networks [16.51305515824504]
Generative adversarial networks (GANs) are a class of machine-learning models that use adversarial training to generate new samples.
One major form of training failure, known as mode collapse, involves the generator failing to reproduce the full diversity of modes in the target probability distribution.
We present an effective model of GAN training, which captures the learning dynamics by replacing the generator neural network with a collection of particles in the output space.
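A toy rendering of that particle picture: the generator is replaced by free particles in output space that climb the discriminator's logit, while the discriminator is trained to separate them from data. The 1-D bimodal target and tiny critic are illustrative assumptions.

```python
# Toy particle model of GAN training dynamics: the generator is replaced by
# free particles in output space, each pushed uphill on the critic's logit.
import torch
import torch.nn as nn

n = 256
particles = torch.randn(n, 1, requires_grad=True)  # the "generator"
D = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
opt_p = torch.optim.Adam([particles], lr=1e-2)
bce = nn.BCEWithLogitsLoss()

def target(m):  # bimodal target distribution: the diversity to be captured
    return torch.cat([torch.randn(m // 2, 1) - 2, torch.randn(m // 2, 1) + 2])

for step in range(2000):
    # Critic step: separate data from the current particle cloud.
    d_loss = bce(D(target(n)), torch.ones(n, 1)) + \
             bce(D(particles.detach()), torch.zeros(n, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Particle step: each particle moves to look "real" to the critic.
    p_loss = bce(D(particles), torch.ones(n, 1))
    opt_p.zero_grad(); p_loss.backward(); opt_p.step()
# Mode collapse appears here as the whole cloud settling into a single mode.
```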
arXiv Detail & Related papers (2022-12-08T22:04:01Z)
- IE-GAN: An Improved Evolutionary Generative Adversarial Network Using a New Fitness Function and a Generic Crossover Operator [20.100388977505002]
We propose an improved E-GAN framework called IE-GAN, which introduces a new fitness function and a generic crossover operator.
In particular, the proposed fitness function can model the evolutionary process of individuals more accurately.
The crossover operator, which has been commonly adopted in evolutionary algorithms, can enable offspring to imitate the superior gene expression of their parents.
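The two ingredients can be sketched generically as follows; the quality-plus-diversity fitness and the parameter-interpolation crossover below are assumptions for illustration, not IE-GAN's exact operators.

```python
# Generic sketch of an evolutionary-GAN fitness and crossover, not the exact
# IE-GAN operators; the term weights below are assumed.
import copy
import torch

def fitness(generator, discriminator, z, w_quality=1.0, w_diversity=0.1):
    fake = generator(z)
    quality = discriminator(fake).mean()        # fool-the-critic term
    diversity = torch.cdist(fake, fake).mean()  # spread of the outputs
    return (w_quality * quality + w_diversity * diversity).item()

def crossover(parent_a, parent_b, alpha=0.5):
    """Offspring = element-wise interpolation of the parents' parameters."""
    child = copy.deepcopy(parent_a)
    with torch.no_grad():
        for c, a, b in zip(child.parameters(), parent_a.parameters(),
                           parent_b.parameters()):
            c.copy_(alpha * a + (1 - alpha) * b)
    return child
```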
arXiv Detail & Related papers (2021-07-25T13:55:07Z)
- Fostering Diversity in Spatial Evolutionary Generative Adversarial Networks [10.603020431394157]
This article introduces Mustangs, a spatially distributed CoE-GAN, which fosters diversity by using different loss functions during the training.
Experimental analysis on MNIST and CelebA demonstrated that Mustangs trains statistically more accurate generators.
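A compact sketch of the spatial idea: GAN cells on a small toroidal grid, each assigned a different generator loss and exchanging individuals with neighbors. The grid size, the three losses, and the neighborhood rule are illustrative assumptions rather than Mustangs' exact configuration.

```python
# Sketch of a spatial co-evolutionary layout in which each grid cell trains
# with a different generator loss; layout and losses are assumptions.
import torch
import torch.nn.functional as F

def bce_loss(d_fake):   return F.softplus(-d_fake).mean()   # non-saturating GAN
def lsq_loss(d_fake):   return ((d_fake - 1) ** 2).mean()   # least-squares GAN
def hinge_loss(d_fake): return torch.relu(1 - d_fake).mean()

LOSSES = [bce_loss, lsq_loss, hinge_loss]
GRID = 3  # 3x3 toroidal grid of (generator, discriminator) cells

cells = [{"pos": (i, j), "loss": LOSSES[(i + j) % len(LOSSES)]}
         for i in range(GRID) for j in range(GRID)]

def neighbors(pos):
    """Von Neumann neighborhood on the torus, used to exchange individuals."""
    i, j = pos
    return [((i - 1) % GRID, j), ((i + 1) % GRID, j),
            (i, (j - 1) % GRID), (i, (j + 1) % GRID)]
# Each epoch, a cell trains its pair with its own loss, then swaps generators
# with a neighbor; the mixture of losses is what fosters population diversity.
```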
arXiv Detail & Related papers (2021-06-25T12:40:36Z)
- HumanACGAN: conditional generative adversarial network with human-based auxiliary classifier and its evaluation in phoneme perception [52.76447516087089]
We propose a conditional generative adversarial network (GAN) incorporating humans' perceptual evaluations.
A deep neural network (DNN)-based generator of a GAN can represent a real-data distribution accurately but can never represent a human-acceptable distribution.
This paper proposes the HumanACGAN, a theoretical extension of the HumanGAN, to deal with conditional human-acceptable distributions.
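HumanGAN-style training replaces backpropagation through a discriminator with gradients estimated from human ratings of perturbed outputs; a minimal sketch of such a perturbation-based estimator follows, with the human rater simulated by a stub.

```python
# Perturbation-based gradient estimate from (simulated) human ratings, in the
# spirit of HumanGAN-style training; `human_rating` is a hypothetical stub.
import torch

def human_rating(samples):
    """Stand-in for crowd-sourced 'acceptability' scores in [0, 1]."""
    return torch.sigmoid(-samples.pow(2).sum(dim=1))  # prefers samples near 0

def estimated_grad(x, n_pairs=16, sigma=0.1):
    """Two-sided NES-style estimate of d(rating)/dx from rating queries only."""
    grads = torch.zeros_like(x)
    for _ in range(n_pairs):
        eps = torch.randn_like(x) * sigma
        delta = human_rating(x + eps) - human_rating(x - eps)  # shape (N,)
        grads += delta.unsqueeze(1) * eps / (2 * sigma ** 2)
    return grads / n_pairs
# A generator can then be updated by backpropagating its outputs against this
# externally supplied gradient instead of a discriminator's gradient.
```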
arXiv Detail & Related papers (2021-02-08T08:25:29Z)
- Evolutionary Generative Adversarial Networks with Crossover Based Knowledge Distillation [4.044110325063562]
We propose a general crossover operator, which can be widely applied to GANs using evolutionary strategies.
We then design an evolutionary GAN framework, C-GAN, based on it.
Finally, we combine the crossover operator with evolutionary generative adversarial networks (EGAN) to implement evolutionary generative adversarial networks with crossover (CE-GAN).
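Since the crossover here is distillation-based, it can be sketched as a child generator fitted to imitate its two parents' outputs on shared latent codes; the architectures and imitation loss below are illustrative assumptions, not CE-GAN's exact setup.

```python
# Sketch of a distillation-style crossover: the child generator imitates its
# parents' outputs on shared latent codes. Architectures and loss are assumed.
import torch
import torch.nn as nn

def make_g():  # tiny placeholder generator
    return nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))

parent_a, parent_b, child = make_g(), make_g(), make_g()
opt = torch.optim.Adam(child.parameters(), lr=1e-3)

for step in range(200):
    z = torch.randn(64, 8)
    with torch.no_grad():  # parents act as fixed teachers
        half = len(z) // 2
        targets = torch.cat([parent_a(z[:half]), parent_b(z[half:])])
    loss = nn.functional.mse_loss(child(z), targets)  # imitate both parents
    opt.zero_grad(); loss.backward(); opt.step()
```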
arXiv Detail & Related papers (2021-01-27T03:24:30Z)
- A multi-agent evolutionary robotics framework to train spiking neural networks [35.90048588096738]
A novel multi-agent evolutionary robotics (ER) based framework is demonstrated for training Spiking Neural Networks (SNNs).
The weights of a population of SNNs along with morphological parameters of bots they control are treated as phenotypes.
Rules of the framework select certain bots and their SNNs for reproduction and others for elimination based on their efficacy in capturing food in a competitive environment.
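The select/reproduce/eliminate rule can be outlined with a generic evolutionary loop; here the SNN weights and bot morphology are abstracted to parameter vectors, and the fitness stub stands in for food captured in the competitive environment.

```python
# Generic select/reproduce/eliminate loop over (weights, morphology) phenotypes;
# fitness here is a stub for "food captured in the competitive environment".
import numpy as np

rng = np.random.default_rng(0)
POP, N_W, N_M = 20, 100, 4  # population size, weight dims, morphology dims

pop = [{"weights": rng.normal(size=N_W), "morph": rng.normal(size=N_M)}
       for _ in range(POP)]

def fitness(bot):
    """Hypothetical stand-in for food captured during an episode."""
    return -np.sum(bot["weights"] ** 2) - np.sum(bot["morph"] ** 2)

for generation in range(50):
    ranked = sorted(pop, key=fitness, reverse=True)
    survivors = ranked[: POP // 2]  # selection: top half reproduces
    children = [{"weights": b["weights"] + 0.1 * rng.normal(size=N_W),
                 "morph":   b["morph"]   + 0.1 * rng.normal(size=N_M)}
                for b in survivors]  # mutation-only reproduction
    pop = survivors + children      # bottom half is eliminated
```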
arXiv Detail & Related papers (2020-12-07T07:26:52Z)
- Learning Efficient GANs for Image Translation via Differentiable Masks and co-Attention Distillation [130.30465659190773]
Generative Adversarial Networks (GANs) have been widely used in image translation, but their high computation and storage costs impede deployment on mobile devices.
We introduce a novel GAN compression method, termed DMAD, by proposing a Differentiable Mask and a co-Attention Distillation.
Experiments show DMAD can reduce the Multiply Accumulate Operations (MACs) of CycleGAN by 13x and that of Pix2Pix by 4x while retaining a comparable performance against the full model.
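The differentiable-mask ingredient can be sketched as learnable sigmoid gates over a convolution's output channels with an L1 sparsity penalty; this is a generic pruning gate for illustration, not DMAD's exact formulation, and it omits the co-attention distillation.

```python
# Generic differentiable channel mask for GAN compression: learnable gates on
# output channels plus an L1 sparsity penalty. Not DMAD's exact formulation.
import torch
import torch.nn as nn

class MaskedConv(nn.Module):
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, padding=k // 2)
        self.gate_logits = nn.Parameter(torch.zeros(c_out))  # one gate/channel

    def forward(self, x):
        gates = torch.sigmoid(self.gate_logits)  # soft 0/1 channel mask
        return self.conv(x) * gates.view(1, -1, 1, 1)

    def sparsity_loss(self):
        return torch.sigmoid(self.gate_logits).sum()  # push gates toward 0

layer = MaskedConv(16, 32)
y = layer(torch.randn(1, 16, 64, 64))
total_loss = y.pow(2).mean() + 1e-3 * layer.sparsity_loss()  # task + sparsity
# After training, channels whose gates are near 0 are removed, cutting MACs.
```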
arXiv Detail & Related papers (2020-11-17T02:39:19Z)
- Improving GAN Training with Probability Ratio Clipping and Sample Reweighting [145.5106274085799]
Generative adversarial networks (GANs) often suffer from inferior performance due to unstable training.
We propose a new variational GAN training framework which enjoys superior training stability.
By plugging the training approach in diverse state-of-the-art GAN architectures, we obtain significantly improved performance over a range of tasks.
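The probability-ratio-clipping ingredient is PPO-flavored; a minimal sketch of a clipped surrogate objective follows, with stand-in log-probabilities and a discriminator-derived advantage as labeled assumptions.

```python
# Minimal PPO-style clipped-ratio surrogate, illustrating the clipping
# mechanism named in the entry above; inputs here are stand-ins.
import torch

def clipped_surrogate(logp_new, logp_old, advantage, eps=0.2):
    """Clip the probability ratio so one update cannot move the model too far."""
    ratio = torch.exp(logp_new - logp_old.detach())
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantage
    return -torch.min(unclipped, clipped).mean()  # maximize the surrogate

# Stand-ins: discriminator scores of generated samples act as the advantage.
logp_new = torch.randn(128, requires_grad=True)
logp_old = logp_new.detach() + 0.1 * torch.randn(128)
advantage = torch.randn(128)
loss = clipped_surrogate(logp_new, logp_old, advantage)
loss.backward()
```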
arXiv Detail & Related papers (2020-06-12T01:39:48Z)
- Feature Purification: How Adversarial Training Performs Robust Deep Learning [66.05472746340142]
We present a principle that we call Feature Purification: one of the causes of the existence of adversarial examples is the accumulation of certain small, dense mixtures in the hidden weights during the training of a neural network.
We present experiments on the CIFAR-10 dataset illustrating this principle, and a theoretical result proving that, for certain natural classification tasks, training a two-layer neural network with ReLU activation using randomly initialized gradient descent indeed satisfies this principle.
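For reference, the adversarial training the entry analyzes can be sketched as a minimal FGSM-based training step for a two-layer ReLU network; the random batch below is a stand-in, not CIFAR-10.

```python
# Minimal adversarial-training step (FGSM) for a two-layer ReLU network;
# the data here is a random stand-in, not CIFAR-10.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(),
                    nn.Linear(256, 10))
opt = torch.optim.SGD(net.parameters(), lr=0.01)
ce = nn.CrossEntropyLoss()

x = torch.rand(32, 3, 32, 32)        # stand-in batch of images
y = torch.randint(0, 10, (32,))      # stand-in labels

# Craft FGSM perturbations, then train on the perturbed batch.
x_adv = x.clone().requires_grad_(True)
ce(net(x_adv), y).backward()
x_adv = (x + (8 / 255) * x_adv.grad.sign()).clamp(0, 1).detach()

loss = ce(net(x_adv), y)             # robust training objective
opt.zero_grad(); loss.backward(); opt.step()
```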
arXiv Detail & Related papers (2020-05-20T16:56:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.