Autoencoding Generative Adversarial Networks
- URL: http://arxiv.org/abs/2004.05472v1
- Date: Sat, 11 Apr 2020 19:51:04 GMT
- Title: Autoencoding Generative Adversarial Networks
- Authors: Conor Lazarou
- Abstract summary: I propose a four-network model which learns a mapping between a specified latent space and a given sample space.
The AEGAN technique offers several improvements to typical GAN training, including training stabilization, mode-collapse prevention, and permitting direct interpolation between real samples.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the years since Goodfellow et al. introduced Generative Adversarial
Networks (GANs), there has been an explosion in the breadth and quality of
generative model applications. Despite this work, GANs still have a long way to
go before they see mainstream adoption, owing largely to their infamous
training instability. Here I propose the Autoencoding Generative Adversarial
Network (AEGAN), a four-network model which learns a bijective mapping between
a specified latent space and a given sample space by applying an adversarial
loss and a reconstruction loss to both the generated images and the generated
latent vectors. The AEGAN technique offers several improvements to typical GAN
training, including training stabilization, mode-collapse prevention, and
permitting the direct interpolation between real samples. The effectiveness of
the technique is illustrated using an anime face dataset.
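The four-network setup and the combined adversarial/reconstruction objective described in the abstract can be sketched roughly as follows. This is a minimal PyTorch illustration, assuming simple MLP networks, BCE adversarial losses, an L1 reconstruction loss, and an arbitrary reconstruction weight; the architectures, dimensions, and hyperparameters are illustrative assumptions, not the paper's configuration.

```python
# Minimal AEGAN-style training sketch (assumptions, not the paper's exact setup):
# four networks -- generator G (z -> x), encoder E (x -> z), image discriminator
# D_x, latent discriminator D_z -- with adversarial losses on generated images and
# generated latent vectors, plus reconstruction losses in both spaces.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 28 * 28  # assumed sizes; x_real is expected flattened


def mlp(in_dim, out_dim, hidden=256):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))


G = mlp(LATENT_DIM, IMG_DIM)   # generator: latent z -> image x
E = mlp(IMG_DIM, LATENT_DIM)   # encoder:   image x -> latent z
D_x = mlp(IMG_DIM, 1)          # discriminator over image space
D_z = mlp(LATENT_DIM, 1)       # discriminator over latent space

bce = nn.BCEWithLogitsLoss()
recon = nn.L1Loss()
opt_gen = torch.optim.Adam(list(G.parameters()) + list(E.parameters()), lr=2e-4)
opt_disc = torch.optim.Adam(list(D_x.parameters()) + list(D_z.parameters()), lr=2e-4)


def train_step(x_real):
    batch = x_real.size(0)
    z_real = torch.randn(batch, LATENT_DIM)            # samples from the latent prior
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminators: distinguish real from generated in both spaces.
    x_fake, z_fake = G(z_real).detach(), E(x_real).detach()
    d_loss = (bce(D_x(x_real), ones) + bce(D_x(x_fake), zeros) +
              bce(D_z(z_real), ones) + bce(D_z(z_fake), zeros))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # Generator + encoder: adversarial and reconstruction losses in both spaces.
    x_fake, z_fake = G(z_real), E(x_real)
    adv = bce(D_x(x_fake), ones) + bce(D_z(z_fake), ones)
    rec = recon(G(E(x_real)), x_real) + recon(E(G(z_real)), z_real)
    g_loss = adv + 10.0 * rec                           # reconstruction weight is an assumption
    opt_gen.zero_grad(); g_loss.backward(); opt_gen.step()
    return d_loss.item(), g_loss.item()
```

Because the encoder maps real samples back into the latent space, interpolating between two real images reduces to interpolating their encodings, e.g. `G(0.5 * (E(x_a) + E(x_b)))`.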
Related papers
- DuDGAN: Improving Class-Conditional GANs via Dual-Diffusion [2.458437232470188]
Class-conditional image generation using generative adversarial networks (GANs) has been investigated through various techniques.
We propose a novel approach for class-conditional image generation using GANs called DuDGAN, which incorporates a dual diffusion-based noise injection process.
Our method outperforms state-of-the-art conditional GAN models for image generation.
arXiv Detail & Related papers (2023-05-24T07:59:44Z) - Generative Cooperative Networks for Natural Language Generation [25.090455367573988]
We introduce Generative Cooperative Networks, in which the discriminator architecture is cooperatively used along with the generation policy to output samples of realistic texts.
We give theoretical guarantees of convergence for our approach, and study various efficient decoding schemes to empirically achieve state-of-the-art results in two main NLG tasks.
arXiv Detail & Related papers (2022-01-28T18:36:57Z) - Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z) - Training Generative Adversarial Networks in One Stage [58.983325666852856]
We introduce a general training scheme that enables training GANs efficiently in only one stage.
We show that the proposed method is readily applicable to other adversarial-training scenarios, such as data-free knowledge distillation.
arXiv Detail & Related papers (2021-02-28T09:03:39Z) - Guiding GANs: How to control non-conditional pre-trained GANs for
conditional image generation [69.10717733870575]
We present a novel method for guiding generic non-conditional GANs to behave as conditional GANs.
Our approach adds an encoder network that generates the high-dimensional random input fed to the generator network of a non-conditional GAN.
arXiv Detail & Related papers (2021-01-04T14:03:32Z) - Generative Adversarial Stacked Autoencoders [3.1829446824051195]
We propose a Generative Adversarial Stacked Convolutional Autoencoder (GASCA) model and a generative adversarial gradual greedy layer-wise learning algorithm designed to train Adversarial Autoencoders.
Our training approach produces images with significantly lower reconstruction error than vanilla joint training.
arXiv Detail & Related papers (2020-11-22T17:51:59Z) - Learning Efficient GANs for Image Translation via Differentiable Masks
and co-Attention Distillation [130.30465659190773]
Generative Adversarial Networks (GANs) have been widely used in image translation, but their high computation and storage costs impede their deployment on mobile devices.
We introduce a novel GAN compression method, termed DMAD, by proposing a Differentiable Mask and a co-Attention Distillation.
Experiments show DMAD can reduce the Multiply Accumulate Operations (MACs) of CycleGAN by 13x and those of Pix2Pix by 4x while retaining performance comparable to the full model.
arXiv Detail & Related papers (2020-11-17T02:39:19Z) - Encoding Robustness to Image Style via Adversarial Feature Perturbations [72.81911076841408]
We adapt adversarial training by directly perturbing feature statistics, rather than image pixels, to produce robust models.
Our proposed method, Adversarial Batch Normalization (AdvBN), is a single network layer that generates worst-case feature perturbations during training.
arXiv Detail & Related papers (2020-09-18T17:52:34Z) - Novelty Detection via Non-Adversarial Generative Network [47.375591404354765]
A novel decoder-encoder framework is proposed for the novelty detection task.
Under the non-adversarial framework, both latent space and image reconstruction space are jointly optimized.
Our model clearly outperforms cutting-edge novelty detectors and achieves state-of-the-art results on the datasets.
arXiv Detail & Related papers (2020-02-03T01:05:59Z) - Generative Adversarial Trainer: Defense to Adversarial Perturbations
with GAN [13.561553183983774]
We propose a novel technique to make neural networks robust to adversarial examples using a generative adversarial network.
The generator network generates adversarial perturbations that can easily fool the classifier network by using the gradient of each image.
Our adversarial training framework efficiently reduces overfitting and outperforms other regularization methods such as Dropout.
arXiv Detail & Related papers (2017-05-09T15:30:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.