Generate more than one child in your co-evolutionary semi-supervised learning GAN
- URL: http://arxiv.org/abs/2504.20560v1
- Date: Tue, 29 Apr 2025 09:04:22 GMT
- Title: Generate more than one child in your co-evolutionary semi-supervised learning GAN
- Authors: Francisco Sedeño, Jamal Toutouh, Francisco Chicano
- Abstract summary: SSL-GAN has attracted many researchers in the last decade. Co-evolutionary approaches have been applied where the two networks of a GAN are evolved in separate populations. We propose a new co-evolutionary approach, called Co-evolutionary Elitist SSL-GAN (CE-SSLGAN), with a panmictic population, elitist replacement, and more than one individual in the offspring.
- Score: 1.3927943269211591
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative Adversarial Networks (GANs) are very useful methods to address semi-supervised learning (SSL) datasets, thanks to their ability to generate samples similar to real data. This approach, called SSL-GAN has attracted many researchers in the last decade. Evolutionary algorithms have been used to guide the evolution and training of SSL-GANs with great success. In particular, several co-evolutionary approaches have been applied where the two networks of a GAN (the generator and the discriminator) are evolved in separate populations. The co-evolutionary approaches published to date assume some spatial structure of the populations, based on the ideas of cellular evolutionary algorithms. They also create one single individual per generation and follow a generational replacement strategy in the evolution. In this paper, we re-consider those algorithmic design decisions and propose a new co-evolutionary approach, called Co-evolutionary Elitist SSL-GAN (CE-SSLGAN), with panmictic population, elitist replacement, and more than one individual in the offspring. We evaluate the performance of our proposed method using three standard benchmark datasets. The results show that creating more than one offspring per population and using elitism improves the results in comparison with a classical SSL-GAN.
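The abstract's three design decisions (panmictic population, elitist replacement, more than one offspring per generation) can be illustrated with a minimal (mu + lambda) co-evolutionary loop. This is a hypothetical sketch, not the authors' implementation: the individuals are toy parameter vectors and the fitness function is a stand-in, whereas in CE-SSLGAN the individuals would be generator and discriminator networks evaluated adversarially.

```python
import random

MU, LAMBDA, GENERATIONS = 4, 2, 10  # parents kept, offspring per generation, iterations

def mutate(weights, sigma=0.1):
    """Gaussian-perturb a parameter vector (stand-in for a training/mutation step)."""
    return [w + random.gauss(0.0, sigma) for w in weights]

def fitness(weights):
    """Toy fitness: closeness to the all-ones vector (placeholder for a GAN objective)."""
    return -sum((w - 1.0) ** 2 for w in weights)

def evolve(population):
    """One elitist (mu + lambda) step on a panmictic population."""
    # More than one offspring per generation; parents drawn from the whole population.
    offspring = [mutate(random.choice(population)) for _ in range(LAMBDA)]
    combined = population + offspring           # elitism: parents compete with children
    combined.sort(key=fitness, reverse=True)
    return combined[:MU]                        # the best mu individuals survive

random.seed(0)
# Two separate populations, as in co-evolutionary GAN training.
generators = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(MU)]
discriminators = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(MU)]

for gen in range(GENERATIONS):
    # In a real SSL-GAN, evaluating one population would involve the other;
    # here each evolves against the toy fitness for brevity.
    generators = evolve(generators)
    discriminators = evolve(discriminators)

best = max(generators, key=fitness)
print(round(fitness(best), 3))
```

With elitist replacement the best fitness in each population is non-decreasing across generations, which is the contrast the paper draws with a generational replacement strategy that discards all parents.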
Related papers
- Nature-Inspired Population-Based Evolution of Large Language Models [58.81047484922555]
This paper formally defines a newly emerging problem: the population-based evolution of large language models (LLMs). Our framework enables the population to evolve through four key operations. Experiments on 12 datasets show that our framework consistently outperforms existing multi-LLM merging and adaptation methods.
arXiv Detail & Related papers (2025-03-03T04:03:31Z) - A two-stage algorithm in evolutionary product unit neural networks for classification [0.0]
This paper presents a procedure to add broader diversity at the beginning of the evolutionary process.
It consists of creating two initial populations with different parameter settings, evolving them for a small number of generations, selecting the best individuals from each population in the same proportion and combining them to constitute a new initial population.
arXiv Detail & Related papers (2024-02-09T18:56:07Z) - DARLEI: Deep Accelerated Reinforcement Learning with Evolutionary Intelligence [77.78795329701367]
We present DARLEI, a framework that combines evolutionary algorithms with parallelized reinforcement learning.
We characterize DARLEI's performance under various conditions, revealing factors impacting diversity of evolved morphologies.
We hope to extend DARLEI in future work to include interactions between diverse morphologies in richer environments.
arXiv Detail & Related papers (2023-12-08T16:51:10Z) - EGANS: Evolutionary Generative Adversarial Network Search for Zero-Shot Learning [13.275693216436494]
We propose evolutionary generative adversarial network search (EGANS) to automatically design the generative network with good adaptation and stability.
EGANS is learned by two stages: evolution generator architecture search and evolution discriminator architecture search.
Experiments show that EGANS consistently improves existing generative ZSL methods on the standard CUB, SUN, AWA2 and FLO datasets.
arXiv Detail & Related papers (2023-08-19T05:47:03Z) - Improving Out-of-Distribution Robustness of Classifiers via Generative Interpolation [56.620403243640396]
Deep neural networks achieve superior performance for learning from independent and identically distributed (i.i.d.) data.
However, their performance deteriorates significantly when handling out-of-distribution (OoD) data.
We develop a simple yet effective method called Generative Interpolation to fuse generative models trained from multiple domains for synthesizing diverse OoD samples.
arXiv Detail & Related papers (2023-07-23T03:53:53Z) - LD-GAN: Low-Dimensional Generative Adversarial Network for Spectral Image Generation with Variance Regularization [72.4394510913927]
Deep learning methods are state-of-the-art for spectral image (SI) computational tasks.
GANs enable diverse augmentation by learning and sampling from the data distribution.
GAN-based SI generation is challenging because the high-dimensional nature of this kind of data hinders the convergence of GAN training, yielding suboptimal generation.
We propose a statistical regularization to control the low-dimensional representation variance for the autoencoder training and to achieve high diversity of samples generated with the GAN.
arXiv Detail & Related papers (2023-04-29T00:25:02Z) - Exploring Generative Adversarial Networks for Text-to-Image Generation with Evolution Strategies [0.4588028371034407]
Some methods rely on pre-trained models such as Generative Adversarial Networks, searching through the latent space of the generative model.
We propose the use of Covariance Matrix Adaptation Evolution Strategy to explore the latent space of Generative Adversarial Networks.
We show that the hybrid method combines the explored areas of the gradient-based and evolutionary approaches, leveraging the quality of the results.
arXiv Detail & Related papers (2022-07-06T18:28:47Z) - Epigenetic evolution of deep convolutional models [81.21462458089142]
We build upon a previously proposed neuroevolution framework to evolve deep convolutional models.
We propose a convolutional layer layout which allows kernels of different shapes and sizes to coexist within the same layer.
The proposed layout enables the size and shape of individual kernels within a convolutional layer to be evolved with a corresponding new mutation operator.
arXiv Detail & Related papers (2021-04-12T12:45:16Z) - Self-Supervised Learning of Graph Neural Networks: A Unified Review [50.71341657322391]
Self-supervised learning is emerging as a new paradigm for making use of large amounts of unlabeled samples.
We provide a unified review of different ways of training graph neural networks (GNNs) using SSL.
Our treatment of SSL methods for GNNs sheds light on the similarities and differences of various methods, setting the stage for developing new methods and algorithms.
arXiv Detail & Related papers (2021-02-22T03:43:45Z) - Evolutionary Generative Adversarial Networks with Crossover Based Knowledge Distillation [4.044110325063562]
We propose a general crossover operator, which can be widely applied to GANs using evolutionary strategies.
We then design an evolutionary GAN framework C-GAN based on it.
We then combine the crossover operator with evolutionary generative adversarial networks (EGAN) to implement the evolutionary generative adversarial networks with crossover (CE-GAN).
arXiv Detail & Related papers (2021-01-27T03:24:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.