Analyzing the Components of Distributed Coevolutionary GAN Training
- URL: http://arxiv.org/abs/2008.01124v1
- Date: Mon, 3 Aug 2020 18:35:06 GMT
- Title: Analyzing the Components of Distributed Coevolutionary GAN Training
- Authors: Jamal Toutouh, Erik Hemberg, and Una-May O'Reilly
- Abstract summary: We investigate the impact of two algorithm components that influence diversity during coevolution.
In experiments on the MNIST dataset, we find that the combination of these two components provides the best generative models.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Distributed coevolutionary Generative Adversarial Network (GAN) training has
empirically shown success in overcoming GAN training pathologies. This is
mainly due to diversity maintenance in the populations of generators and
discriminators during the training process. The method studied here coevolves
sub-populations on each cell of a spatial grid organized into overlapping Moore
neighborhoods. We investigate the impact on performance of two algorithm
components that influence diversity during coevolution: the
performance-based selection/replacement inside each sub-population and the
communication through migration of solutions (networks) among overlapping
neighborhoods. In experiments on the MNIST dataset, we find that the combination of
these two components provides the best generative models. In addition,
migrating solutions without applying selection in the sub-populations achieves
competitive results, while selection without communication between cells
reduces performance.
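As a rough illustration of how the two components interact, the following toy sketch combines per-cell selection/replacement with migration across overlapping Moore neighborhoods. This is not the authors' Lipizzaner implementation: scalar fitness values stand in for generator/discriminator networks, and the function name, grid size, and mutation scale are all illustrative assumptions.

```python
import random

def coevolve(grid_size=3, pop_size=4, steps=5,
             use_selection=True, use_migration=True, seed=0):
    """Toy spatial coevolution on a grid of overlapping Moore neighborhoods.
    Scalar fitness values stand in for generator/discriminator networks."""
    rng = random.Random(seed)
    # Each grid cell holds a sub-population of candidate solutions.
    grid = [[[rng.random() for _ in range(pop_size)]
             for _ in range(grid_size)] for _ in range(grid_size)]

    def moore(i, j):
        # Moore neighborhood with toroidal wrap-around (the cell plus its 8 neighbors).
        return [((i + di) % grid_size, (j + dj) % grid_size)
                for di in (-1, 0, 1) for dj in (-1, 0, 1)]

    for _ in range(steps):
        new_grid = [[None] * grid_size for _ in range(grid_size)]
        for i in range(grid_size):
            for j in range(grid_size):
                # "Training" step: each candidate drifts slightly.
                pool = [f + rng.gauss(0, 0.01) for f in grid[i][j]]
                if use_migration:
                    # Communication: copy the best solution of every
                    # overlapping neighborhood cell into this cell's pool.
                    pool += [max(grid[a][b]) for a, b in moore(i, j)]
                if use_selection:
                    # Performance-based selection/replacement: keep the best.
                    new_grid[i][j] = sorted(pool, reverse=True)[:pop_size]
                else:
                    # No selection: random replacement from the pool.
                    new_grid[i][j] = rng.sample(pool, pop_size)
        grid = new_grid
    return max(f for row in grid for cell in row for f in cell)
```

Calling `coevolve(use_selection=False, use_migration=True)` versus `coevolve(use_selection=True, use_migration=False)` mimics the ablations studied in the paper, though with scalars the analogy is only structural.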
Related papers
- A two-stage algorithm in evolutionary product unit neural networks for classification
This paper presents a procedure to add broader diversity at the beginning of the evolutionary process.
It consists of creating two initial populations with different parameter settings, evolving them for a small number of generations, selecting the best individuals from each population in the same proportion and combining them to constitute a new initial population.
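The initialization procedure described above can be sketched as follows; the helper names, mutation scales, and warm-up length are assumptions for illustration, not details from that paper.

```python
import random

def two_stage_init(pop_size=10, warmup_gens=5, seed=0):
    """Sketch of two-stage initialization: evolve two populations created with
    different parameter settings for a few generations, then combine the best
    individuals of each in equal proportion into a new initial population."""
    rng = random.Random(seed)

    def evolve(pop, sigma):
        # A few generations of mutation plus truncation selection.
        for _ in range(warmup_gens):
            offspring = pop + [x + rng.gauss(0, sigma) for x in pop]
            pop = sorted(offspring, reverse=True)[:pop_size]
        return pop

    # Two initial populations with different parameter settings (mutation scale).
    pop_a = evolve([rng.random() for _ in range(pop_size)], sigma=0.1)
    pop_b = evolve([rng.random() for _ in range(pop_size)], sigma=0.5)
    # Best individuals from each population, taken in the same proportion.
    return pop_a[:pop_size // 2] + pop_b[:pop_size // 2]
```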
arXiv Detail & Related papers (2024-02-09T18:56:07Z)
- Population-Based Evolutionary Gaming for Unsupervised Person Re-identification
Unsupervised person re-identification has achieved great success through the self-improvement of individual neural networks.
We develop a population-based evolutionary gaming (PEG) framework in which a population of diverse neural networks is trained concurrently through selection, reproduction, mutation, and population mutual learning.
PEG produces new state-of-the-art accuracy for person re-identification, indicating the great potential of population-based network cooperative training for unsupervised learning.
arXiv Detail & Related papers (2023-06-08T14:33:41Z)
- Tackling Long-Tailed Category Distribution Under Domain Shifts
Existing approaches cannot handle the scenario where both issues, long-tailed category distribution and domain shift, exist simultaneously.
We designed three novel core functional blocks including Distribution Calibrated Classification Loss, Visual-Semantic Mapping and Semantic-Similarity Guided Augmentation.
Two new datasets were proposed for this problem, named AWA2-LTS and ImageNet-LTS.
arXiv Detail & Related papers (2022-07-20T19:07:46Z)
- Heterogeneous Federated Learning via Grouped Sequential-to-Parallel Training
Federated learning (FL) is a rapidly growing privacy-preserving collaborative machine learning paradigm.
We propose a data heterogeneous-robust FL approach, FedGSP, to address this challenge.
We show that FedGSP improves the accuracy by 3.7% on average compared with seven state-of-the-art approaches.
arXiv Detail & Related papers (2022-01-31T03:15:28Z)
- Semi-supervised Domain Adaptive Structure Learning
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- MineGAN++: Mining Generative Models for Efficient Knowledge Transfer to Limited Data Domains
We propose a novel knowledge transfer method for generative models based on mining the knowledge that is most beneficial to a specific target domain.
This is done using a miner network that identifies which part of the generative distribution of each pretrained GAN outputs samples closest to the target domain.
We show that the proposed method, called MineGAN, effectively transfers knowledge to domains with few target images, outperforming existing methods.
arXiv Detail & Related papers (2021-04-28T13:10:56Z)
- Training Generative Adversarial Networks in One Stage
We introduce a general training scheme that enables training GANs efficiently in only one stage.
We show that the proposed method is readily applicable to other adversarial-training scenarios, such as data-free knowledge distillation.
arXiv Detail & Related papers (2021-02-28T09:03:39Z)
- Signal Propagation in a Gradient-Based and Evolutionary Learning System
Coevolutionary algorithms (CEAs) for GAN training are empirically robust to GAN training pathologies.
We propose Lipi-Ring, a distributed CEA like Lipizzaner, except that it uses a different spatial topology.
Our central question is whether the different directionality of signal propagation meets or exceeds the performance quality and training efficiency of Lipizzaner.
arXiv Detail & Related papers (2021-02-10T16:46:44Z)
- Demonstrating the Evolution of GANs through t-SNE
Evolutionary algorithms, such as COEGAN, were recently proposed to improve GAN training.
In this work, we propose an evaluation method based on t-distributed Stochastic Neighbor Embedding (t-SNE) to assess the progress of GANs.
A metric based on the resulting t-SNE maps and the Jaccard index is proposed to represent the model quality.
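That paper's exact discretization of the t-SNE maps is not given here; assuming the metric compares the sets of grid cells occupied by real and generated samples in the embedded space, a minimal sketch of the Jaccard index is:

```python
def jaccard_index(a, b):
    """Jaccard index between two sets of occupied t-SNE grid cells:
    |intersection| / |union|, with the empty case defined as 0."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical occupied cells of a discretized 2-D t-SNE map
# for real versus generated samples.
real_cells = {(0, 0), (0, 1), (1, 1), (2, 2)}
fake_cells = {(0, 1), (1, 1), (3, 3)}
print(jaccard_index(real_cells, fake_cells))  # 0.4
```

A score near 1 would indicate that generated samples cover the same regions of the embedding as real ones.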
arXiv Detail & Related papers (2021-01-31T20:07:08Z)
- Parallel/distributed implementation of cellular training for generative adversarial neural networks
Generative adversarial networks (GANs) are widely used to learn generative models.
This article presents a parallel/distributed implementation of a cellular competitive coevolutionary method to train two populations of GANs.
arXiv Detail & Related papers (2020-04-07T16:01:58Z)
- Contradictory Structure Learning for Semi-supervised Domain Adaptation
Current adversarial adaptation methods attempt to align the cross-domain features.
Two challenges remain unsolved: 1) the conditional distribution mismatch and 2) the bias of the decision boundary towards the source domain.
We propose a novel framework for semi-supervised domain adaptation by unifying the learning of opposite structures.
arXiv Detail & Related papers (2020-02-06T22:58:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.