Using Skill Rating as Fitness on the Evolution of GANs
- URL: http://arxiv.org/abs/2004.04796v2
- Date: Sun, 31 Jan 2021 19:54:42 GMT
- Title: Using Skill Rating as Fitness on the Evolution of GANs
- Authors: Victor Costa, Nuno Lourenço, João Correia, Penousal Machado
- Abstract summary: Generative Adversarial Networks (GANs) are adversarial models that achieved impressive results on generative tasks.
GANs present some challenges regarding stability, making the training usually a hit-and-miss process.
Recent works proposed the use of evolutionary algorithms on GAN training, aiming to solve these challenges and to provide an automatic way to find good models.
- Score: 0.4588028371034407
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative Adversarial Networks (GANs) are adversarial models that have achieved impressive results on generative tasks. Despite these results, GANs present some challenges regarding stability, making training usually a hit-and-miss process. To overcome these challenges, several improvements were proposed to better handle the internal characteristics of the model, such as alternative loss functions or architectural changes in the neural networks used by the generator and the discriminator. Recent works proposed the use of evolutionary algorithms in GAN training, aiming to solve these challenges and to provide an automatic way to find good models. In this context, COEGAN proposes the use of coevolution and neuroevolution to orchestrate the training of GANs. However, previous experiments detected that some of the fitness functions used to guide the evolution are not ideal. In this work we propose the evaluation of a game-based fitness function to be used within the COEGAN method. Skill rating is a metric that quantifies the skill of players in a game and has already been used to evaluate GANs. We extend this idea by using skill rating in an evolutionary algorithm to train GANs. The results show that skill rating can be used as fitness to guide the evolution in COEGAN without depending on an external evaluator.
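Skill rating systems assign each player a numeric rating that is updated from match outcomes. As a rough illustration of the idea only (this sketch uses the simpler Elo formula; the skill-rating GAN literature typically uses Glicko-2, and the exact system in this paper may differ), one might treat each generator-discriminator evaluation round as a series of matches:

```python
def elo_update(rating_a, rating_b, score_a, k=32.0):
    """Update two Elo ratings after one match.

    score_a: 1.0 if player A won, 0.5 for a draw, 0.0 if A lost.
    """
    # Expected score of A under the logistic Elo model
    expected_a = 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b

# Hypothetical evaluation round: the generator "wins" each sample the
# discriminator misclassifies as real, and the discriminator wins the rest.
gen, disc = 1500.0, 1500.0
outcomes = [1.0, 0.0, 1.0, 1.0, 0.0]  # from the generator's point of view
for s in outcomes:
    gen, disc = elo_update(gen, disc, s)
# The resulting ratings can then serve as fitness values for selection.
```

Because the rating comes only from game outcomes between the coevolving populations, no external evaluator (e.g. a pretrained classifier, as in FID) is required.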
Related papers
- REvolve: Reward Evolution with Large Language Models using Human Feedback [6.4550546442058225]
Large language models (LLMs) have been used for reward generation from natural language task descriptions.
LLMs, guided by human feedback, can be used to formulate reward functions that reflect human implicit knowledge.
We introduce REvolve, a truly evolutionary framework that uses LLMs for reward design in reinforcement learning.
arXiv Detail & Related papers (2024-06-03T13:23:27Z)
- Robust Learning with Progressive Data Expansion Against Spurious Correlation [65.83104529677234]
We study the learning process of a two-layer nonlinear convolutional neural network in the presence of spurious features.
Our analysis suggests that imbalanced data groups and easily learnable spurious features can lead to the dominance of spurious features during the learning process.
We propose a new training algorithm called PDE that efficiently enhances the model's robustness for a better worst-group performance.
arXiv Detail & Related papers (2023-06-08T05:44:06Z)
- SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural Networks [56.35403810762512]
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware.
We study spike-based implicit differentiation on the equilibrium state (SPIDE) that extends the recently proposed training method.
arXiv Detail & Related papers (2023-02-01T04:22:59Z)
- Dissecting adaptive methods in GANs [46.90376306847234]
We study how adaptive methods help train generative adversarial networks (GANs)
By considering an update rule with the magnitude of the Adam update and the normalized direction of SGD, we empirically show that the adaptive magnitude of Adam is key for GAN training.
We prove that in that setting, GANs trained with nSGDA recover all the modes of the true distribution, whereas the same networks trained with SGDA (and any learning rate configuration) suffer from mode collapse.
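The hybrid update rule described above can be sketched in NumPy (a minimal illustration of the ablation, not the authors' code; hyperparameter defaults are assumptions): normalized SGDA keeps only the gradient direction, while the hybrid step pairs Adam's step magnitude with that direction.

```python
import numpy as np

def nsgda_step(params, grad, lr=0.01):
    # Normalized SGDA: keep the gradient's direction, discard its magnitude.
    return params - lr * grad / (np.linalg.norm(grad) + 1e-12)

def adam_magnitude_nsgda_direction(params, grad, m, v, t,
                                   lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    # Hypothetical hybrid update: Adam's step *magnitude* combined with
    # the normalized SGD *direction*, as in the ablation described above.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)          # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)          # bias-corrected second moment
    adam_step = lr * m_hat / (np.sqrt(v_hat) + eps)
    direction = grad / (np.linalg.norm(grad) + 1e-12)
    return params - np.linalg.norm(adam_step) * direction, m, v
```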
arXiv Detail & Related papers (2022-10-09T19:00:07Z)
- Convergence of GANs Training: A Game and Stochastic Control Methodology [4.933916728941277]
Training of generative adversarial networks (GANs) is known for its difficulty to converge.
This paper first confirms the lack of convexity in GANs objective functions, hence the well-posedness problem of GANs models.
In particular, it presents an optimal solution for adaptive learning rate which depends on the convexity of the objective function.
arXiv Detail & Related papers (2021-12-01T01:52:23Z)
- IE-GAN: An Improved Evolutionary Generative Adversarial Network Using a New Fitness Function and a Generic Crossover Operator [20.100388977505002]
We propose an improved E-GAN framework called IE-GAN, which introduces a new fitness function and a generic crossover operator.
In particular, the proposed fitness function can model the evolutionary process of individuals more accurately.
The crossover operator, which has been commonly adopted in evolutionary algorithms, can enable offspring to imitate the superior gene expression of their parents.
arXiv Detail & Related papers (2021-07-25T13:55:07Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Demonstrating the Evolution of GANs through t-SNE [0.4588028371034407]
Evolutionary algorithms, such as COEGAN, were recently proposed as a solution to improve the GAN training.
In this work, we propose an evaluation method based on t-distributed Neighbour Embedding (t-SNE) to assess the progress of GANs.
A metric based on the resulting t-SNE maps and the Jaccard index is proposed to represent the model quality.
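The Jaccard-based metric can be illustrated with a small sketch (assumed details for illustration: here the 2-D t-SNE coordinates of real and generated samples are discretized into grid cells, and the Jaccard index compares the sets of occupied cells; the paper's exact construction may differ):

```python
import numpy as np

def occupied_cells(points, cell_size=1.0):
    # Map 2-D t-SNE coordinates to the set of grid cells they fall into.
    return {(int(np.floor(x / cell_size)), int(np.floor(y / cell_size)))
            for x, y in points}

def jaccard_index(cells_a, cells_b):
    # |A ∩ B| / |A ∪ B|; defined as 1.0 when both sets are empty.
    if not cells_a and not cells_b:
        return 1.0
    return len(cells_a & cells_b) / len(cells_a | cells_b)

real = occupied_cells([(0.2, 0.3), (1.5, 0.1), (2.7, 2.9)])
fake = occupied_cells([(0.4, 0.6), (1.1, 0.2), (5.0, 5.0)])
quality = jaccard_index(real, fake)  # higher = generated samples cover
                                     # more of the same regions as real ones
```

A low overlap between the occupied regions would indicate mode collapse or poor coverage of the real data distribution.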
arXiv Detail & Related papers (2021-01-31T20:07:08Z)
- Evolutionary Generative Adversarial Networks with Crossover Based Knowledge Distillation [4.044110325063562]
We propose a general crossover operator, which can be widely applied to GANs using evolutionary strategies.
We then design an evolutionary GAN framework C-GAN based on it.
We then combine the crossover operator with evolutionary generative adversarial networks (EGAN) to implement evolutionary generative adversarial networks with crossover (CE-GAN).
arXiv Detail & Related papers (2021-01-27T03:24:30Z)
- Training Generative Adversarial Networks by Solving Ordinary Differential Equations [54.23691425062034]
We study the continuous-time dynamics induced by GAN training.
From this perspective, we hypothesise that instabilities in training GANs arise from the integration error.
We experimentally verify that well-known ODE solvers (such as Runge-Kutta) can stabilise training.
arXiv Detail & Related papers (2020-10-28T15:23:49Z)
- Improving GAN Training with Probability Ratio Clipping and Sample Reweighting [145.5106274085799]
Generative adversarial networks (GANs) often suffer from inferior performance due to unstable training.
We propose a new variational GAN training framework which enjoys superior training stability.
By plugging the training approach in diverse state-of-the-art GAN architectures, we obtain significantly improved performance over a range of tasks.
arXiv Detail & Related papers (2020-06-12T01:39:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.