Generative Adversarial Stacked Autoencoders
- URL: http://arxiv.org/abs/2011.12236v1
- Date: Sun, 22 Nov 2020 17:51:59 GMT
- Title: Generative Adversarial Stacked Autoencoders
- Authors: Ariel Ruiz-Garcia, Ibrahim Almakky, Vasile Palade, Luke Hicks
- Abstract summary: We propose a Generative Adversarial Stacked Convolutional Autoencoder (GASCA) model and a generative adversarial gradual greedy layer-wise learning algorithm designed to train Adversarial Autoencoders.
Our training approach produces images with significantly lower reconstruction error than vanilla joint training.
- Score: 3.1829446824051195
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative Adversarial Networks (GANs) have become predominant in image
generation tasks. Their success is attributed to the training regime which
employs two models: a generator G and discriminator D that compete in a minimax
zero sum game. Nonetheless, GANs are difficult to train due to their
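For reference, the minimax zero-sum game between G and D is the standard GAN value function:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)]
+ \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]
```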
sensitivity to hyperparameters and parameter initialisation, which often leads
to vanishing gradients, non-convergence, or mode collapse, where the generator
is unable to create samples with different variations. In this work, we propose
a novel Generative Adversarial Stacked Convolutional Autoencoder (GASCA) model
and a generative adversarial gradual greedy layer-wise learning algorithm
designed to train Adversarial Autoencoders in an efficient and incremental
manner. Our training approach produces images with significantly lower
reconstruction error than vanilla joint training.
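To illustrate the greedy layer-wise idea behind the training algorithm, here is a minimal numpy sketch of greedy layer-wise training of a stacked (linear) autoencoder. This is an illustration of the general technique only, not the paper's method: the adversarial component is omitted, and all layer sizes, learning rates, and names are made up.

```python
import numpy as np

def train_layer(x, hidden, lr=0.1, steps=200, rng=None):
    """Train one linear autoencoder layer on x by gradient descent on MSE."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n, d = x.shape
    W = rng.normal(scale=0.1, size=(d, hidden))   # encoder weights
    V = rng.normal(scale=0.1, size=(hidden, d))   # decoder weights
    for _ in range(steps):
        h = x @ W                    # encode
        r = h @ V                    # decode (reconstruction)
        e = r - x                    # reconstruction error
        gV = h.T @ e / n             # gradient of 0.5*MSE w.r.t. V
        gW = x.T @ (e @ V.T) / n     # gradient of 0.5*MSE w.r.t. W
        W -= lr * gW
        V -= lr * gV
    return W, V

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 8))

# Greedy layer-wise stacking: each new layer is trained on the codes
# produced by the previously trained (frozen) layers.
codes, stack = x, []
for hidden in (6, 4):
    W, V = train_layer(codes, hidden, rng=rng)
    stack.append((W, V))
    codes = codes @ W

# Reconstruct by decoding back through the stack in reverse order.
recon = codes
for W, V in reversed(stack):
    recon = recon @ V

err = float(np.mean((recon - x) ** 2))
baseline = float(np.mean(x ** 2))  # error of an all-zero reconstruction
print(round(err, 3), round(baseline, 3))
```

Each layer is trained to convergence on the frozen codes of the layer below it, which is what makes the scheme incremental; the paper's contribution is doing this with an adversarial loss rather than plain MSE.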
Related papers
- GE-AdvGAN: Improving the transferability of adversarial samples by
gradient editing-based adversarial generative model [69.71629949747884]
Adversarial generative models, such as Generative Adversarial Networks (GANs), are widely applied for generating various types of data.
In this work, we propose a novel algorithm named GE-AdvGAN to enhance the transferability of adversarial samples.
arXiv Detail & Related papers (2024-01-11T16:43:16Z)
- Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z)
- LD-GAN: Low-Dimensional Generative Adversarial Network for Spectral Image Generation with Variance Regularization [72.4394510913927]
Deep learning methods are state-of-the-art for spectral image (SI) computational tasks.
GANs enable diverse augmentation by learning and sampling from the data distribution.
GAN-based SI generation is challenging since the high dimensionality of this kind of data hinders the convergence of GAN training, yielding suboptimal generation.
We propose a statistical regularization to control the low-dimensional representation variance for the autoencoder training and to achieve high diversity of samples generated with the GAN.
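As a rough illustration of variance regularization on a latent representation, the sketch below adds a penalty that pushes each latent dimension's variance toward a target value. The exact regularizer used in LD-GAN may differ; `lambda_var` and `target_var` are illustrative names, not from the paper.

```python
import numpy as np

def variance_regularized_loss(x, z, x_hat, lambda_var=0.1, target_var=1.0):
    """Reconstruction MSE plus a penalty on latent per-dimension variance."""
    recon = np.mean((x_hat - x) ** 2)      # reconstruction term
    var_per_dim = z.var(axis=0)            # sample variance of each latent dim
    var_penalty = np.mean((var_per_dim - target_var) ** 2)
    return recon + lambda_var * var_penalty

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 16))
z = 0.5 * rng.normal(size=(32, 4))         # under-dispersed codes get penalised
loss = float(variance_regularized_loss(x, z, x_hat=np.zeros_like(x)))
print(loss > 0)
```

Controlling the latent variance this way keeps the low-dimensional codes well spread out, which is the property the paper exploits to get diverse GAN samples.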
arXiv Detail & Related papers (2023-04-29T00:25:02Z)
- Time Efficient Training of Progressive Generative Adversarial Network using Depthwise Separable Convolution and Super Resolution Generative Adversarial Network [0.0]
We propose a novel pipeline that combines Progressive GAN with slight modifications and Super Resolution GAN.
Super Resolution GAN upsamples low-resolution images to high-resolution images, which can prove useful in drastically reducing training time.
arXiv Detail & Related papers (2022-02-24T19:53:37Z)
- Generative Cooperative Networks for Natural Language Generation [25.090455367573988]
We introduce Generative Cooperative Networks, in which the discriminator architecture is cooperatively used along with the generation policy to output samples of realistic texts.
We give theoretical guarantees of convergence for our approach, and study various efficient decoding schemes to empirically achieve state-of-the-art results in two main NLG tasks.
arXiv Detail & Related papers (2022-01-28T18:36:57Z)
- Conditional Variational Autoencoder with Balanced Pre-training for Generative Adversarial Networks [11.46883762268061]
Class imbalance occurs in many real-world applications, including image classification, where the number of images in each class differs significantly.
With imbalanced data, generative adversarial networks (GANs) lean towards majority class samples.
We propose a novel Conditional Variational Autoencoder with Balanced Pre-training for Generative Adversarial Networks (CAPGAN) as an augmentation tool to generate realistic synthetic images.
arXiv Detail & Related papers (2022-01-13T06:52:58Z)
- Guiding GANs: How to control non-conditional pre-trained GANs for conditional image generation [69.10717733870575]
We present a novel method for guiding generic non-conditional GANs to behave as conditional GANs.
Our approach adds into the mix an encoder network that generates the high-dimensional random inputs fed to the generator network of a non-conditional GAN.
arXiv Detail & Related papers (2021-01-04T14:03:32Z)
- Learning Efficient GANs for Image Translation via Differentiable Masks and co-Attention Distillation [130.30465659190773]
Generative Adversarial Networks (GANs) have been widely used in image translation, but their high computation and storage costs impede deployment on mobile devices.
We introduce a novel GAN compression method, termed DMAD, by proposing a Differentiable Mask and a co-Attention Distillation.
Experiments show DMAD can reduce the Multiply Accumulate Operations (MACs) of CycleGAN by 13x and that of Pix2Pix by 4x while retaining a comparable performance against the full model.
arXiv Detail & Related papers (2020-11-17T02:39:19Z)
- Unsupervised Controllable Generation with Self-Training [90.04287577605723]
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
arXiv Detail & Related papers (2020-07-17T21:50:35Z)
- Autoencoding Generative Adversarial Networks [0.0]
I propose a four-network model which learns a mapping between a specified latent space and a given sample space.
The AEGAN technique offers several improvements over typical GAN training, including training stabilisation, mode-collapse prevention, and permitting direct interpolation between real samples.
arXiv Detail & Related papers (2020-04-11T19:51:04Z)
- Generative Adversarial Trainer: Defense to Adversarial Perturbations with GAN [13.561553183983774]
We propose a novel technique to make neural networks robust to adversarial examples using a generative adversarial network.
The generator network generates an adversarial perturbation that can easily fool the classifier network, using the gradient of each image.
Our adversarial training framework efficiently reduces overfitting and outperforms other regularization methods such as Dropout.
arXiv Detail & Related papers (2017-05-09T15:30:58Z)
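The Generative Adversarial Trainer learns its perturbations with a generator network; as a simpler illustration of the underlying idea, namely perturbing an input along the gradient of the loss with respect to that input, here is an FGSM-style sketch against a logistic classifier. All names and numbers are illustrative, not from the paper.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(0)
w = rng.normal(size=8)            # a fixed, already "trained" linear classifier
x = w / np.linalg.norm(w)         # an input the classifier scores confidently
y = 1.0                           # its true label

# Gradient of the logistic loss w.r.t. the *input* x:
# L = -log sigmoid(w @ x) for y = 1, so dL/dx = (sigmoid(w @ x) - y) * w
grad_x = (sigmoid(w @ x) - y) * w

# Step in the direction that increases the loss (sign of the gradient).
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

print(float(sigmoid(w @ x)), float(sigmoid(w @ x_adv)))
```

The classifier's confidence on `x_adv` is lower than on `x`, which is exactly the vulnerability that adversarial training schemes like the one above are designed to close.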
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.