Exploring the Evolution of GANs through Quality Diversity
- URL: http://arxiv.org/abs/2007.06251v1
- Date: Mon, 13 Jul 2020 08:54:52 GMT
- Title: Exploring the Evolution of GANs through Quality Diversity
- Authors: Victor Costa, Nuno Lourenço, João Correia, Penousal Machado
- Abstract summary: We propose the application of a quality-diversity algorithm in the evolution of GANs.
We compare our proposal with the original COEGAN model and with an alternative version using a global competition approach.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) have achieved notable advances in the
field of generative algorithms, presenting high-quality results mainly in the
context of images. However, GANs are hard to train, and several aspects of the
model must be designed by hand in advance to ensure training success. In this
context, evolutionary algorithms such as COEGAN were proposed to address the
challenges in GAN training. Nevertheless, some of these solutions suffer from a
lack of diversity and premature optimization. In this paper, we propose the
application of a quality-diversity algorithm to the evolution of GANs. The
solution is based on the Novelty Search with Local Competition (NSLC)
algorithm, adapting the concepts used in COEGAN to this new proposal. We
compare our proposal with the original COEGAN model and with an alternative
version using a global competition approach. The experimental results show
that our proposal increases the diversity of the discovered solutions and
improves the performance of the models found by the algorithm. Furthermore, the
global competition approach was able to consistently find better models for
GANs.
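The core idea of NSLC can be sketched as follows. This is a minimal illustration, not the paper's implementation: individuals are plain dictionaries with a one-dimensional `behavior` descriptor and a scalar `fitness`, both hypothetical stand-ins for the phenotypic descriptors and fitness scores used in COEGAN.

```python
def nslc_scores(population, archive, k=3):
    """For each individual, compute (novelty, local_wins): novelty is the
    mean distance to the k nearest behaviour descriptors in the population
    plus archive; local_wins counts how many of those same neighbours the
    individual beats on raw fitness (the local competition)."""
    reference = population + archive
    scores = []
    for ind in population:
        others = [o for o in reference if o is not ind]
        # Neighbours are sorted by behavioural distance, not by fitness.
        others.sort(key=lambda o: abs(o["behavior"] - ind["behavior"]))
        nearest = others[:k]
        novelty = sum(abs(o["behavior"] - ind["behavior"])
                      for o in nearest) / len(nearest)
        local_wins = sum(1 for o in nearest if ind["fitness"] > o["fitness"])
        scores.append((novelty, local_wins))
    return scores
```

Selection then favours individuals that score well on both objectives, which is what discourages the whole population from collapsing onto one region of the search space.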
Related papers
- Diffusion Models as Network Optimizers: Explorations and Analysis [71.69869025878856]
Generative diffusion models (GDMs) have emerged as a promising new approach to network optimization.
In this study, we first explore the intrinsic characteristics of generative models.
We provide a concise theoretical and intuitive demonstration of the advantages of generative models over discriminative network optimization.
arXiv Detail & Related papers (2024-11-01T09:05:47Z)
- Illuminating the Diversity-Fitness Trade-Off in Black-Box Optimization [9.838618121102053]
In real-world applications, users often favor structurally diverse design choices over one high-quality solution.
This paper presents a fresh perspective on this challenge by considering the problem of identifying a fixed number of solutions with a pairwise distance above a specified threshold.
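A greedy baseline for this formulation is easy to state. The sketch below is an illustrative assumption, not the paper's algorithm: it scans candidates in descending quality and keeps one only if it is at least `threshold` away from everything already kept.

```python
def select_diverse(solutions, quality, dist, k, threshold):
    """Greedy sketch: pick up to k solutions, best quality first, enforcing
    a minimum pairwise distance of `threshold` between the chosen ones."""
    chosen = []
    for s in sorted(solutions, key=quality, reverse=True):
        if all(dist(s, c) >= threshold for c in chosen):
            chosen.append(s)
            if len(chosen) == k:
                break
    return chosen
```

The trade-off the paper studies is visible even here: raising `threshold` forces structural diversity but may exclude some high-quality candidates.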
arXiv Detail & Related papers (2024-08-29T09:55:55Z)
- DiffSG: A Generative Solver for Network Optimization with Diffusion Model [75.27274046562806]
Diffusion generative models can consider a broader range of solutions and exhibit stronger generalization through their learned parameters.
We propose a new framework, which leverages intrinsic distribution learning of diffusion generative models to learn high-quality solutions.
arXiv Detail & Related papers (2024-08-13T07:56:21Z)
- Faster Optimal Coalition Structure Generation via Offline Coalition Selection and Graph-Based Search [61.08720171136229]
We present a novel algorithm, SMART, for the problem based on a hybridization of three innovative techniques.
Two of these techniques are based on dynamic programming, where we show a powerful connection between the coalitions selected for evaluation and the performance of the algorithms.
Our techniques bring a new way of approaching the problem and a new level of precision to the field.
arXiv Detail & Related papers (2024-07-22T23:24:03Z)
- Evaluating Ensemble Methods for News Recommender Systems [50.90330146667386]
This paper demonstrates how ensemble methods can be used to combine many diverse state-of-the-art algorithms to achieve superior results on the Microsoft News dataset (MIND).
Our findings demonstrate that a combination of NRS algorithms can outperform individual algorithms, provided that the base learners are sufficiently diverse.
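One simple way to combine diverse base recommenders, offered here only as a hypothetical illustration of the ensemble idea (the paper's actual combination strategies may differ), is Borda-style rank aggregation:

```python
from collections import defaultdict

def ensemble_rank(rankings):
    """Borda-style aggregation sketch: each base recommender contributes
    positional scores (the top item in a list of n gets n points), and the
    ensemble ranking sorts items by total score."""
    scores = defaultdict(float)
    for ranking in rankings:
        n = len(ranking)
        for pos, item in enumerate(ranking):
            scores[item] += n - pos
    return sorted(scores, key=scores.get, reverse=True)
```

An item that no single recommender ranks first can still win overall if it is consistently near the top, which is one way diversity among base learners pays off.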
arXiv Detail & Related papers (2024-06-23T13:40:50Z)
- Accelerating the Evolutionary Algorithms by Gaussian Process Regression with $\epsilon$-greedy acquisition function [2.7716102039510564]
We propose a novel method to estimate the elite individual and thereby accelerate the convergence of the optimization.
The proposal shows broad promise for elite estimation and faster convergence.
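The ε-greedy acquisition idea can be sketched generically. In this hypothetical sketch the Gaussian-process surrogate is abstracted into any callable that predicts fitness; with probability ε a random candidate is explored, otherwise the predicted-best one is exploited:

```python
import random

def epsilon_greedy_acquire(candidates, surrogate, epsilon=0.1, rng=random):
    """Explore with probability epsilon, otherwise exploit the candidate
    the surrogate model predicts to be best."""
    if rng.random() < epsilon:
        return rng.choice(candidates)
    return max(candidates, key=surrogate)
```

The exploration branch is what keeps the surrogate from repeatedly proposing the same region when its predictions are wrong early in the run.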
arXiv Detail & Related papers (2022-10-13T07:56:47Z)
- Revisiting GANs by Best-Response Constraint: Perspective, Methodology, and Application [49.66088514485446]
Best-Response Constraint (BRC) is a general learning framework to explicitly formulate the potential dependency of the generator on the discriminator.
We show that, even with different motivations and formulations, a variety of existing GANs can all be uniformly improved by our flexible BRC methodology.
arXiv Detail & Related papers (2022-05-20T12:42:41Z)
- A Generic Approach for Enhancing GANs by Regularized Latent Optimization [79.00740660219256]
We introduce a generic framework called generative-model inference that is capable of enhancing pre-trained GANs effectively and seamlessly.
Our basic idea is to efficiently infer the optimal latent distribution for the given requirements using Wasserstein gradient flow techniques.
arXiv Detail & Related papers (2021-12-07T05:22:50Z)
- Manifold Interpolation for Large-Scale Multi-Objective Optimization via Generative Adversarial Networks [12.18471608552718]
Large-scale multiobjective optimization problems (LSMOPs) are characterized as involving hundreds or even thousands of decision variables and multiple conflicting objectives.
Previous research has shown that these optimal solutions are uniformly distributed on the manifold structure in the low-dimensional space.
In this work, a generative adversarial network (GAN)-based manifold framework is proposed to learn the manifold and generate high-quality solutions.
arXiv Detail & Related papers (2021-01-08T09:38:38Z)
- Generative Adversarial Networks (GANs Survey): Challenges, Solutions, and Future Directions [15.839877885431806]
Generative Adversarial Networks (GANs) are a novel class of deep generative models that has recently gained significant attention.
GANs learn complex and high-dimensional distributions implicitly over images, audio, and other data.
There exist major challenges in the training of GANs, i.e., mode collapse, non-convergence, and instability.
arXiv Detail & Related papers (2020-04-30T19:26:46Z)
- Large Scale Many-Objective Optimization Driven by Distributional Adversarial Networks [1.2461503242570644]
We propose a novel algorithm based on the RVEA framework that uses Distributional Adversarial Networks (DAN) to generate new offspring.
The proposed algorithm is tested on 9 benchmark problems from the large-scale multi-objective problem (LSMOP) suite.
arXiv Detail & Related papers (2020-03-16T04:14:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.