Evolving GAN Formulations for Higher Quality Image Synthesis
- URL: http://arxiv.org/abs/2102.08578v1
- Date: Wed, 17 Feb 2021 05:11:21 GMT
- Title: Evolving GAN Formulations for Higher Quality Image Synthesis
- Authors: Santiago Gonzalez and Mohak Kant and Risto Miikkulainen
- Abstract summary: Generative Adversarial Networks (GANs) have extended deep learning to complex generation and translation tasks.
GANs are notoriously difficult to train: Mode collapse and other instabilities in the training process often degrade the quality of the generated results.
This paper presents a new technique called TaylorGAN for improving GANs by discovering customized loss functions for each of its two networks.
- Score: 15.861807854144228
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative Adversarial Networks (GANs) have extended deep learning to complex
generation and translation tasks across different data modalities. However,
GANs are notoriously difficult to train: Mode collapse and other instabilities
in the training process often degrade the quality of the generated results,
such as images. This paper presents a new technique called TaylorGAN for
improving GANs by discovering customized loss functions for each of its two
networks. The loss functions are parameterized as Taylor expansions and
optimized through multiobjective evolution. On an image-to-image translation
benchmark task, this approach qualitatively improves generated image quality
and quantitatively improves two independent GAN performance metrics. It
therefore forms a promising approach for applying GANs to more challenging
tasks in the future.
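To make the mechanism concrete, here is a minimal sketch, assuming each loss is a truncated Taylor expansion of the discriminator output whose coefficients are the parameters an evolutionary search would tune; the function name and coefficient values are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: a GAN loss parameterized as a truncated Taylor expansion.
# The coefficient list is the evolvable genome; its values are illustrative.
import torch

def taylor_loss(d_out: torch.Tensor, coeffs, center: float = 0.0) -> torch.Tensor:
    """Evaluate sum_k coeffs[k] * (d_out - center)**k, averaged over the batch."""
    x = d_out - center
    total = torch.zeros_like(d_out)
    power = torch.ones_like(d_out)  # (d_out - center)**0
    for c in coeffs:
        total = total + c * power
        power = power * x
    return total.mean()

# One candidate generator loss an evolutionary search might propose:
g_loss = taylor_loss(torch.randn(8), coeffs=[0.0, -1.0, 0.25])
```

Multiobjective evolution would then score candidate coefficient vectors for the generator and discriminator losses against several GAN quality metrics at once.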
Related papers
- G-Refine: A General Quality Refiner for Text-to-Image Generation [74.16137826891827]
We introduce G-Refine, a general image quality refiner designed to enhance low-quality images without compromising the integrity of high-quality ones.
The model is composed of three interconnected modules: a perception quality indicator, an alignment quality indicator, and a general quality enhancement module.
Extensive experimentation reveals that AI-generated images (AIGIs) refined by G-Refine outperform their originals on 10+ quality metrics across 4 databases.
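A rough control-flow sketch of the gating idea described above, assuming the two quality indicators decide whether the enhancement module runs at all; the module internals and the threshold are placeholders, not G-Refine's actual design.

```python
# Hypothetical skeleton: the indicators gate the enhancement module so that
# only low-scoring images are refined and high-quality ones pass through.
def g_refine(image, perception_q, alignment_q, enhance, threshold=0.5):
    """perception_q / alignment_q: callables returning a score in [0, 1]."""
    if min(perception_q(image), alignment_q(image)) >= threshold:
        return image            # already high quality: leave intact
    return enhance(image)       # low quality: refine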
arXiv Detail & Related papers (2024-04-29T00:54:38Z)
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on a Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
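As a loose illustration of distance-based weighting (not the paper's exact DWT block), the sketch below biases standard attention logits by the pairwise spatial distance between token positions, so nearby image components interact more strongly; all names and the scale factor are assumptions.

```python
# Illustrative sketch: dot-product attention whose logits are penalized by
# pairwise spatial distance, so closer tokens attend to each other more.
import torch

def distance_weighted_attention(q, k, v, coords, alpha: float = 0.1):
    """q, k, v: (N, D) token features; coords: (N, 2) token grid positions."""
    logits = (q @ k.T) / q.shape[-1] ** 0.5
    dist = torch.cdist(coords, coords)          # (N, N) pairwise distances
    weights = torch.softmax(logits - alpha * dist, dim=-1)
    return weights @ v
```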
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
- A Simple and Effective Baseline for Attentional Generative Adversarial Networks [8.63558211869045]
Generating high-quality images by guiding a generative model with text descriptions is an innovative and challenging task.
In recent years, attention-based models such as AttnGAN, SD-GAN, and StackGAN++ have been proposed to guide GAN training.
We apply a popular, simple, and effective idea to remove redundant structure and improve the backbone network of AttnGAN.
Our improvements significantly reduce the model size and improve training efficiency while keeping the model's performance unchanged.
arXiv Detail & Related papers (2023-06-26T13:55:57Z)
- TcGAN: Semantic-Aware and Structure-Preserved GANs with Individual Vision Transformer for Fast Arbitrary One-Shot Image Generation [11.207512995742999]
One-shot image generation (OSG) with generative adversarial networks that learn from the internal patches of a given image has attracted worldwide attention.
We propose TcGAN, a novel structure-preserving method with an individual vision transformer, to overcome the shortcomings of existing one-shot image generation methods.
arXiv Detail & Related papers (2023-02-16T03:05:59Z)
- Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative adversarial network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z)
- LT-GAN: Self-Supervised GAN with Latent Transformation Detection [10.405721171353195]
We propose a self-supervised approach (LT-GAN) to improve the generation quality and diversity of images.
We experimentally demonstrate that our proposed LT-GAN can be effectively combined with other state-of-the-art training techniques for added benefits.
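A minimal sketch of the latent-transformation idea named in the title: perturb a latent code, generate both images, and train an auxiliary estimator to recover the perturbation from the image pair. All module names are placeholders, not LT-GAN's actual architecture.

```python
# Hypothetical self-supervised objective: the estimator must detect which
# latent transformation `eps` separates the two generated images.
import torch

def lt_ssl_loss(generator, estimator, z: torch.Tensor) -> torch.Tensor:
    """Auxiliary loss: recover the latent perturbation from the image pair."""
    eps = torch.randn_like(z)                  # random latent transformation
    x, x_t = generator(z), generator(z + eps)
    pred = estimator(x, x_t)                   # predicted transformation
    return (pred - eps).pow(2).mean()
```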
arXiv Detail & Related papers (2020-10-19T22:09:45Z)
- Efficient texture-aware multi-GAN for image inpainting [5.33024001730262]
Recent inpainting methods based on generative adversarial networks (GANs) show remarkable improvements.
We propose a multi-GAN architecture improving both the performance and rendering efficiency.
arXiv Detail & Related papers (2020-09-30T14:58:03Z)
- DeshuffleGAN: A Self-Supervised GAN to Improve Structure Learning [0.0]
We argue that one of the crucial points to improve the GAN performance is to be able to provide the model with a capability to learn the spatial structure in data.
We introduce a deshuffling task that solves a puzzle of randomly shuffled image tiles, which in turn helps the DeshuffleGAN learn to increase its expressive capacity for spatial structure and realistic appearance.
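The pretext task can be sketched roughly as follows: split an image into a grid of tiles, apply a random permutation, and keep the permutation index as the label a deshuffling head must predict. The 2x2 grid and fixed permutation set below are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of the deshuffling pretext task: shuffle image tiles and
# record which permutation was applied as the self-supervision label.
import itertools
import random
import torch

PERMUTATIONS = list(itertools.permutations(range(4)))  # 2x2 grid -> 24 labels

def shuffle_tiles(img: torch.Tensor):
    """img: (C, H, W) with H and W divisible by 2; returns (shuffled, label)."""
    c, h, w = img.shape
    tiles = [img[:, i * h // 2:(i + 1) * h // 2, j * w // 2:(j + 1) * w // 2]
             for i in range(2) for j in range(2)]
    label = random.randrange(len(PERMUTATIONS))
    shuffled = [tiles[p] for p in PERMUTATIONS[label]]
    top = torch.cat(shuffled[:2], dim=2)      # left|right along width
    bottom = torch.cat(shuffled[2:], dim=2)
    return torch.cat([top, bottom], dim=1), label  # stack along height
```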
arXiv Detail & Related papers (2020-06-15T19:06:07Z)
- Improving GAN Training with Probability Ratio Clipping and Sample Reweighting [145.5106274085799]
Generative adversarial networks (GANs) often suffer from inferior performance due to unstable training.
We propose a new variational GAN training framework which enjoys superior training stability.
By plugging the training approach in diverse state-of-the-art GAN architectures, we obtain significantly improved performance over a range of tasks.
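The clipping idea in the title can be sketched in the spirit of PPO-style updates (the paper's full variational objective is more involved, and the names here are illustrative): ratios far from 1 are clipped so no single update moves the model too aggressively.

```python
# Illustrative sketch of probability-ratio clipping for a stabilized update.
import torch

def clipped_ratio_loss(log_p_new: torch.Tensor, log_p_old: torch.Tensor,
                       advantage: torch.Tensor, eps: float = 0.2) -> torch.Tensor:
    """Penalize updates whose probability ratio strays outside [1-eps, 1+eps]."""
    ratio = torch.exp(log_p_new - log_p_old)
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return -torch.minimum(unclipped, clipped).mean()
```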
arXiv Detail & Related papers (2020-06-12T01:39:48Z)
- Image Augmentations for GAN Training [57.65145659417266]
We provide insights and guidelines on how to augment images for both vanilla GANs and GANs with regularizations.
Surprisingly, we find that vanilla GANs attain generation quality on par with recent state-of-the-art results.
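A minimal sketch of one common recipe, applying the same kind of augmentation to both real and generated images before the discriminator scores them; the specific transforms below are illustrative, not the paper's recommended set.

```python
# Illustrative sketch: augment both real and fake batches before the
# discriminator, using a non-saturating logistic loss.
import torch
import torch.nn.functional as F

def augment(batch: torch.Tensor) -> torch.Tensor:
    """Random horizontal flip plus small additive noise (illustrative transforms)."""
    if torch.rand(()) < 0.5:
        batch = torch.flip(batch, dims=[-1])
    return batch + 0.02 * torch.randn_like(batch)

def d_loss(discriminator, real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
    """Non-saturating discriminator loss on augmented real and fake batches."""
    return (F.softplus(-discriminator(augment(real))).mean()
            + F.softplus(discriminator(augment(fake))).mean())
```

Augmenting only the real batch risks the discriminator learning the augmentation itself as a real/fake cue, which is why both batches are transformed here.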
arXiv Detail & Related papers (2020-06-04T00:16:02Z)
- Asymmetric GANs for Image-to-Image Translation [62.49892218126542]
Existing Generative Adversarial Network (GAN) models learn the mapping from the source domain to the target domain using a cycle-consistency loss.
We propose an AsymmetricGAN model with translation and reconstruction generators of unequal sizes and a different parameter-sharing strategy.
Experiments on both supervised and unsupervised generative tasks with 8 datasets show that AsymmetricGAN achieves superior model capacity and better generation performance.
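The asymmetric design can be sketched as a larger translation generator paired with a smaller reconstruction generator, tied together by an L1 cycle-consistency loss; the layer widths below are illustrative assumptions, not the paper's architecture.

```python
# Illustrative sketch: unequal generators joined by cycle consistency.
import torch
import torch.nn as nn

# Larger translation generator and smaller reconstruction generator.
g_translate = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                            nn.Conv2d(64, 3, 3, padding=1))
f_reconstruct = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                              nn.Conv2d(16, 3, 3, padding=1))

def cycle_loss(x: torch.Tensor) -> torch.Tensor:
    """L1 cycle consistency: translate, then reconstruct back to the source."""
    return (f_reconstruct(g_translate(x)) - x).abs().mean()

loss = cycle_loss(torch.randn(1, 3, 32, 32))
```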
arXiv Detail & Related papers (2019-12-14T21:24:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.