Demonstrating the Evolution of GANs through t-SNE
- URL: http://arxiv.org/abs/2102.00524v2
- Date: Thu, 25 Feb 2021 10:49:30 GMT
- Title: Demonstrating the Evolution of GANs through t-SNE
- Authors: Victor Costa, Nuno Lourenço, João Correia, Penousal Machado
- Abstract summary: Evolutionary algorithms, such as COEGAN, were recently proposed as a solution to improve GAN training.
In this work, we propose an evaluation method based on t-distributed Stochastic Neighbour Embedding (t-SNE) to assess the progress of GANs.
A metric based on the resulting t-SNE maps and the Jaccard index is proposed to represent the model quality.
- Score: 0.4588028371034407
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative Adversarial Networks (GANs) are powerful generative models that
achieved strong results, mainly in the image domain. However, the training of
GANs is not trivial, presenting some challenges tackled by different
strategies. Evolutionary algorithms, such as COEGAN, were recently proposed as
a solution to improve GAN training, overcoming common problems that affect
the model, such as vanishing gradient and mode collapse. In this work, we
propose an evaluation method based on t-distributed Stochastic Neighbour
Embedding (t-SNE) to assess the progress of GANs and visualize the distribution
learned by generators in training. We propose the use of the feature space
extracted from trained discriminators to evaluate both samples produced by
generators and samples from the input dataset. A metric based on the resulting t-SNE
maps and the Jaccard index is proposed to represent the model quality.
Experiments were conducted to assess the progress of GANs when trained using
COEGAN. The results show, both by visual inspection and by metrics, that the
Evolutionary Algorithm gradually improves discriminators and generators through
generations, avoiding problems such as mode collapse.
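As a concrete illustration of the proposed evaluation, the snippet below is a minimal sketch rather than the authors' implementation: it assumes feature vectors have already been extracted by a trained discriminator from both dataset samples and generated samples, embeds the two sets in a shared t-SNE map, and compares the regions they occupy with the Jaccard index. The grid discretization and its resolution are our assumptions.
```python
import numpy as np
from sklearn.manifold import TSNE

def jaccard_tsne_score(real_feats, fake_feats, grid_size=20):
    """Score a generator from discriminator features (illustrative sketch).

    real_feats, fake_feats: (n, d) arrays of features extracted by a
    trained discriminator from dataset samples and generated samples.
    """
    # Embed both sets jointly so they share a single t-SNE map.
    emb = TSNE(n_components=2).fit_transform(np.vstack([real_feats, fake_feats]))
    real_emb, fake_emb = emb[:len(real_feats)], emb[len(real_feats):]
    lo, hi = emb.min(axis=0), emb.max(axis=0)

    def occupied_cells(points):
        # Discretize the 2-D map into a grid and record which cells the
        # points cover (the grid resolution is an assumption here).
        idx = np.floor((points - lo) / (hi - lo + 1e-9) * grid_size)
        return set(map(tuple, idx.astype(int)))

    real_cells, fake_cells = occupied_cells(real_emb), occupied_cells(fake_emb)
    # Jaccard index of the covered regions: near 1 when generated samples
    # cover the data distribution; mode collapse shrinks fake_cells.
    return len(real_cells & fake_cells) / len(real_cells | fake_cells)
```
Tracked across training generations, a rising score would mirror the gradual improvement the paper reports, while mode collapse would show up as a shrinking set of occupied cells for the generated samples.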
Related papers
- GE-AdvGAN: Improving the transferability of adversarial samples by gradient editing-based adversarial generative model [69.71629949747884]
Adversarial generative models, such as Generative Adversarial Networks (GANs), are widely applied for generating various types of data.
In this work, we propose a novel algorithm named GE-AdvGAN to enhance the transferability of adversarial samples.
arXiv Detail & Related papers (2024-01-11T16:43:16Z)
- Time-series Generation by Contrastive Imitation [87.51882102248395]
We study a generative framework that seeks to combine the strengths of both: Motivated by a moment-matching objective to mitigate compounding error, we optimize a local (but forward-looking) transition policy.
At inference, the learned policy serves as the generator for iterative sampling, and the learned energy serves as a trajectory-level measure for evaluating sample quality.
arXiv Detail & Related papers (2023-11-02T16:45:25Z)
- DuDGAN: Improving Class-Conditional GANs via Dual-Diffusion [2.458437232470188]
Class-conditional image generation using generative adversarial networks (GANs) has been investigated through various techniques.
We propose a novel approach for class-conditional image generation using GANs called DuDGAN, which incorporates a dual diffusion-based noise injection process.
Our method outperforms state-of-the-art conditional GAN models for image generation.
arXiv Detail & Related papers (2023-05-24T07:59:44Z)
- Exploring Generative Adversarial Networks for Text-to-Image Generation with Evolution Strategies [0.4588028371034407]
Some methods rely on pre-trained models such as Generative Adversarial Networks, searching through the latent space of the generative model.
We propose the use of Covariance Matrix Adaptation Evolution Strategy to explore the latent space of Generative Adversarial Networks.
We show that the hybrid method combines the explored areas of the gradient-based and evolutionary approaches, improving the quality of the results.
arXiv Detail & Related papers (2022-07-06T18:28:47Z)
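The CMA-ES entry above describes a concrete search loop, so a hedged sketch may be useful. Everything below except the use of CMA-ES itself is a stand-in: `generate`, `fitness`, the linear map `W`, and the `target` vector are toy placeholders for a trained generator and a text-image score (e.g., a CLIP-based fitness), and the `cma` package is one common implementation choice, not necessarily the authors'.
```python
import numpy as np
import cma  # the pycma package: pip install cma

rng = np.random.default_rng(0)
LATENT_DIM = 32                        # assumed latent dimensionality
W = rng.normal(size=(LATENT_DIM, 64))  # toy stand-in for a trained generator
target = rng.normal(size=64)           # toy stand-in for the prompt's goal

def generate(z):
    # Hypothetical generator: a fixed linear map instead of a trained GAN.
    return np.asarray(z) @ W

def fitness(sample):
    # Hypothetical loss: distance to a target (a real setup would score
    # text-image similarity instead). CMA-ES minimises this value.
    return float(np.linalg.norm(sample - target))

# Start the search at the latent-space origin with initial step size 0.5.
es = cma.CMAEvolutionStrategy(LATENT_DIM * [0.0], 0.5)
while not es.stop():
    candidates = es.ask()  # sample latent vectors from the search distribution
    es.tell(candidates, [fitness(generate(z)) for z in candidates])
best_z = es.result.xbest   # best latent vector found so far
```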
- Training Discrete Deep Generative Models via Gapped Straight-Through Estimator [72.71398034617607]
We propose a Gapped Straight-Through (GST) estimator to reduce the variance without incurring resampling overhead.
This estimator is inspired by the essential properties of Straight-Through Gumbel-Softmax.
Experiments demonstrate that the proposed GST estimator enjoys better performance compared to strong baselines on two discrete deep generative modeling tasks.
arXiv Detail & Related papers (2022-06-15T01:46:05Z)
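The GST construction itself is not spelled out in the summary above, but the Straight-Through Gumbel-Softmax estimator it is inspired by can be sketched in a few lines. This is the standard baseline, shown for orientation only, not the GST estimator; the toy loss is illustrative.
```python
import torch
import torch.nn.functional as F

def st_gumbel_softmax(logits, tau=1.0):
    """Standard Straight-Through Gumbel-Softmax -- the baseline whose
    properties inspired GST, not the GST estimator itself."""
    y_soft = F.gumbel_softmax(logits, tau=tau, hard=False)  # relaxed sample
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)
    # Forward pass emits the discrete one-hot sample; gradients flow through
    # the soft relaxation (the straight-through trick that GST refines to
    # cut gradient variance without resampling).
    return (y_hard - y_soft).detach() + y_soft

logits = torch.randn(4, 10, requires_grad=True)
sample = st_gumbel_softmax(logits, tau=0.5)  # one-hot in the forward pass
loss = (sample * torch.arange(10.0)).sum()   # toy downstream loss
loss.backward()                              # gradients reach the logits
```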
- Overcoming Mode Collapse with Adaptive Multi Adversarial Training [5.09817514580101]
Generative Adversarial Networks (GANs) are a class of generative models used for various applications.
GANs have been known to suffer from the mode collapse problem, in which some modes of the target distribution are ignored by the generator.
We introduce a novel training procedure that adaptively spawns additional discriminators to remember previous modes of generation.
arXiv Detail & Related papers (2021-12-29T05:57:55Z)
- Convolutional generative adversarial imputation networks for spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z)
- HGAN: Hybrid Generative Adversarial Network [25.940501417539416]
We propose a hybrid generative adversarial network (HGAN) for which we can enforce data density estimation via an autoregressive model.
A novel deep architecture within the GAN formulation is developed to adversarially distill the autoregressive model information in addition to the standard GAN training approach.
arXiv Detail & Related papers (2021-02-07T03:54:12Z)
- Unsupervised Controllable Generation with Self-Training [90.04287577605723]
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
arXiv Detail & Related papers (2020-07-17T21:50:35Z)
- Top-k Training of GANs: Improving GAN Performance by Throwing Away Bad Samples [67.11669996924671]
We introduce a simple (one line of code) modification to the Generative Adversarial Network (GAN) training algorithm.
When updating the generator parameters, we zero out the gradient contributions from the elements of the batch that the critic scores as least realistic.
We show that this 'top-k update' procedure is a generally applicable improvement.
arXiv Detail & Related papers (2020-02-14T19:27:50Z)
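Because the modification above is literally one line, a hedged sketch is easy to give. The loss form below (a WGAN-style critic score) is our assumption; the described trick applies equally to other GAN losses.
```python
import torch

def topk_generator_loss(critic_scores, k):
    """Keep only the k batch elements the critic scores as most realistic
    when computing the generator loss; the rest get zero gradient.
    critic_scores: critic outputs on generated samples (scored WGAN-style
    here, which is an assumption -- the trick is loss-agnostic)."""
    topk_scores, _ = torch.topk(critic_scores, k)  # discard the worst samples
    return -topk_scores.mean()                     # maximise the critic score

# Toy usage: a batch of 8 fake samples, keeping the top 6 for the update.
scores = torch.randn(8, requires_grad=True)  # stand-in for critic outputs
topk_generator_loss(scores, k=6).backward()
print(scores.grad)  # the 2 lowest-scoring entries have zero gradient
```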
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.