Generative Adversarial Networks
- URL: http://arxiv.org/abs/2203.00667v1
- Date: Tue, 1 Mar 2022 18:37:48 GMT
- Title: Generative Adversarial Networks
- Authors: Gilad Cohen and Raja Giryes
- Abstract summary: Generative Adversarial Networks (GANs) are very popular frameworks for generating high-quality data.
This chapter gives an introduction to GANs, discussing their principal mechanism and presenting some of their inherent problems during training and evaluation.
- Score: 43.10140199124212
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative Adversarial Networks (GANs) are very popular frameworks for
generating high-quality data, and are widely used in both academia and
industry across many domains. Arguably, their most substantial impact has been in
the area of computer vision, where they achieve state-of-the-art image
generation. This chapter gives an introduction to GANs, discussing their
principal mechanism and presenting some of their inherent problems during
training and evaluation. We focus on three issues: (1) mode collapse, (2)
vanishing gradients, and (3) generation of low-quality images. We then list
some architecture-variant and loss-variant GANs that remedy these
challenges. Lastly, we present two examples of GANs in real-world
applications: data augmentation and face image generation.
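To make the principal mechanism concrete: a generator G and a discriminator D play a minimax game over V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))], with D trained to separate real from generated samples and G trained to fool D. The sketch below is a minimal PyTorch training loop for this game on toy data; the networks, data, and hyperparameters are placeholder assumptions, not the chapter's code. The generator uses the common non-saturating loss (maximize log D(G(z))), a standard remedy for the vanishing-gradient issue listed above.

```python
# Minimal GAN training loop (illustrative sketch; networks, data, and
# hyperparameters are placeholder assumptions, not from the chapter).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(128, data_dim) * 0.5 + 2.0  # stand-in for real data
    z = torch.randn(128, latent_dim)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    fake = G(z).detach()
    loss_d = bce(D(real), torch.ones(128, 1)) + bce(D(fake), torch.zeros(128, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: non-saturating loss, i.e. maximize log D(G(z)).
    # This keeps gradients alive early on, when D easily rejects fakes
    # (the vanishing-gradient issue discussed in the chapter).
    loss_g = bce(D(G(z)), torch.ones(128, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Mode collapse shows up in exactly this loop when G maps many different z to near-identical outputs; the architecture-variant and loss-variant GANs surveyed below target such failures.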
Related papers
- LSReGen: Large-Scale Regional Generator via Backward Guidance Framework [12.408195812609042]
Controllable image generation remains a challenge.
Current methods, such as training, forward guidance, and backward guidance, have notable limitations.
We propose a novel controllable generation framework that offers a generalized interpretation of backward guidance.
We introduce LSReGen, a large-scale layout-to-image method designed to generate high-quality, layout-compliant images.
arXiv Detail & Related papers (2024-07-21T05:44:46Z)
- On Unsupervised Image-to-image translation and GAN stability [0.5523170464803535]
We study some of the failure cases of a seminal work in the field, CycleGAN.
We propose two general models to try to alleviate these problems.
arXiv Detail & Related papers (2023-10-18T04:00:43Z)
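For context on the CycleGAN entry above: CycleGAN trains two generators, G: X -> Y and F: Y -> X, with adversarial losses plus a cycle-consistency term, and the reported failure cases concern this training setup. The sketch below shows the cycle-consistency term only, with placeholder linear "generators"; it is standard CycleGAN background, not the paper's proposed models.

```python
# Cycle-consistency term of CycleGAN (background sketch; G and F are
# placeholder generators, and the adversarial terms are omitted).
import torch
import torch.nn as nn

G = nn.Linear(3, 3)  # stand-in for the generator X -> Y
F = nn.Linear(3, 3)  # stand-in for the generator Y -> X
l1 = nn.L1Loss()

x = torch.randn(8, 3)  # batch from domain X
y = torch.randn(8, 3)  # batch from domain Y

# Translating to the other domain and back should reconstruct the input:
# ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1
loss_cycle = l1(F(G(x)), x) + l1(G(F(y)), y)
```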
- PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with Dual-Discriminators [120.06891448820447]
Obtaining clear and visually pleasing underwater images has become a common concern, and the task of underwater image enhancement (UIE) has emerged to meet this need.
In this paper, we propose a physical model-guided GAN for UIE, referred to as PUGAN.
Our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics.
arXiv Detail & Related papers (2023-06-15T07:41:12Z)
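The summary above does not describe PUGAN's architecture beyond its dual discriminators, so the sketch below only illustrates the generic dual-discriminator pattern: two discriminators score the generator's output and their losses are combined with an assumed weight. All module names and the weighting are placeholders, not PUGAN's actual design.

```python
# Generic dual-discriminator generator loss (a sketch of the pattern
# only; PUGAN's actual discriminators and weighting may differ).
import torch
import torch.nn as nn

G = nn.Conv2d(3, 3, 3, padding=1)        # placeholder enhancer network
D_style = nn.Conv2d(3, 1, 4, stride=2)   # placeholder discriminator 1
D_content = nn.Conv2d(3, 1, 4, stride=2) # placeholder discriminator 2
bce = nn.BCEWithLogitsLoss()

x = torch.randn(4, 3, 64, 64)  # stand-in for a degraded underwater image
out = G(x)

# The generator is trained to fool both discriminators; lam balances
# the two adversarial signals (an assumed weighting).
s1, s2 = D_style(out), D_content(out)
lam = 0.5
loss_g = bce(s1, torch.ones_like(s1)) + lam * bce(s2, torch.ones_like(s2))
```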
- A Survey on Leveraging Pre-trained Generative Adversarial Networks for Image Editing and Restoration [72.17890189820665]
Generative adversarial networks (GANs) have drawn enormous attention due to their simple yet effective training mechanism and superior image generation quality.
Recent GAN models have greatly narrowed the gap between generated images and real ones.
Many recent works take advantage of pre-trained GAN models by exploiting their well-disentangled latent space and learned priors.
arXiv Detail & Related papers (2022-07-21T05:05:58Z)
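The latent-space editing these works exploit can be illustrated generically: given a pre-trained generator and a latent direction tied to an attribute, moving a latent code along that direction edits the attribute while the learned GAN prior keeps outputs image-like. The sketch below is this idea in its simplest form; the generator, direction, and shapes are placeholder assumptions, not any specific model's API.

```python
# Latent-space editing with a pre-trained generator (generic sketch;
# G and `direction` are placeholders, not a specific model's API).
import torch
import torch.nn as nn

latent_dim = 512
G = nn.Linear(latent_dim, 3 * 64 * 64)  # stand-in for a pre-trained generator

z = torch.randn(1, latent_dim)            # latent code of an image
direction = torch.randn(latent_dim)       # assumed attribute direction,
direction = direction / direction.norm()  # e.g. found by a linear probe

# Increasing alpha moves the code along the direction, editing the
# attribute; the GAN prior keeps outputs on the image manifold.
for alpha in (0.0, 1.0, 2.0):
    img = G(z + alpha * direction).view(3, 64, 64)
```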
- Generative Neural Articulated Radiance Fields [104.9224190002448]
We develop a 3D GAN framework that learns to generate radiance fields of human bodies in a canonical pose and warp them using an explicit deformation field into a desired body pose or facial expression.
We show that our deformation-aware training procedure significantly improves the quality of generated bodies or faces when editing their poses or facial expressions.
arXiv Detail & Related papers (2022-06-28T22:49:42Z)
- Generative Adversarial Networks for Image Super-Resolution: A Survey [101.39605080291783]
Single image super-resolution (SISR) has played an important role in the field of image processing.
Recent generative adversarial networks (GANs) can achieve excellent results on low-resolution images, even with small training samples.
In this paper, we conduct a comparative study of GANs from different perspectives.
arXiv Detail & Related papers (2022-04-28T16:35:04Z)
- GIU-GANs: Global Information Utilization for Generative Adversarial Networks [3.3945834638760948]
In this paper, we propose a new GAN called Involution Generative Adversarial Networks (GIU-GANs).
GIU-GANs leverage a brand-new module called the Global Information Utilization (GIU) module, which integrates Squeeze-and-Excitation Networks (SENet) and involution.
Batch Normalization (BN) inevitably ignores the representation differences among noise sampled by the generator, and thus degrades the generated image quality.
arXiv Detail & Related papers (2022-01-25T17:17:15Z)
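The GIU-GANs summary names the module's two ingredients, an SE gate and involution, but not how they are wired together. The sketch below is one plausible composition (SE channel gating followed by an involution layer, after Li et al., 2021); the ordering and all hyperparameters are assumptions, not the paper's design.

```python
# A GIU-like block: SE channel gate + involution (composition assumed;
# the paper's actual GIU wiring may differ).
import torch
import torch.nn as nn

class SEGate(nn.Module):
    """Squeeze-and-Excitation channel gate."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))  # squeeze: global average pool
        return x * w[:, :, None, None]   # excite: per-channel scaling

class Involution(nn.Module):
    """Involution layer: the kernel is generated per position from the
    input and shared across channel groups (the inverse of convolution's
    spatial sharing)."""
    def __init__(self, channels, kernel_size=3, groups=4):
        super().__init__()
        self.k, self.g = kernel_size, groups
        self.reduce = nn.Conv2d(channels, channels // 4, 1)
        self.span = nn.Conv2d(channels // 4, groups * kernel_size ** 2, 1)
        self.unfold = nn.Unfold(kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, h, w = x.shape
        kernel = self.span(self.reduce(x)).view(b, self.g, self.k ** 2, h * w)
        patches = self.unfold(x).view(b, self.g, c // self.g, self.k ** 2, h * w)
        out = (kernel.unsqueeze(2) * patches).sum(dim=3)  # sum over the window
        return out.view(b, c, h, w)

class GIUSketch(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.se = SEGate(channels)
        self.inv = Involution(channels)

    def forward(self, x):
        return self.inv(self.se(x))

x = torch.randn(2, 16, 32, 32)
y = GIUSketch(16)(x)  # output shape matches input: (2, 16, 32, 32)
```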
- Dual Contrastive Loss and Attention for GANs [82.713118646294]
We propose a novel dual contrastive loss and show that, with this loss, the discriminator learns more generalized and distinguishable representations to incentivize generation.
We find that attention remains an important module for successful image generation, even though it was not used in recent state-of-the-art models.
By combining the strengths of these remedies, we improve the state-of-the-art Fréchet Inception Distance (FID) by at least 17.5% on several benchmark datasets.
arXiv Detail & Related papers (2021-03-31T01:10:26Z)
- InfoMax-GAN: Improved Adversarial Image Generation via Information Maximization and Contrastive Learning [39.316605441868944]
Generative Adversarial Networks (GANs) are fundamental to many generative modelling applications.
We propose a principled framework to simultaneously mitigate two fundamental issues in GANs: catastrophic forgetting of the discriminator and mode collapse of the generator.
Our approach significantly stabilizes GAN training and improves GAN performance for image synthesis across five datasets.
arXiv Detail & Related papers (2020-07-09T06:56:11Z)
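The InfoMax-GAN summary mentions information maximization and contrastive learning but not the exact formulation. As generic background, the sketch below computes an InfoNCE-style contrastive loss between paired feature sets, the standard mutual-information lower bound such methods typically maximize; the feature pairing and temperature here are assumptions, not the paper's loss.

```python
# InfoNCE-style contrastive loss (generic background sketch; the actual
# InfoMax-GAN feature pairs and formulation may differ).
import torch
import torch.nn.functional as F

def info_nce(a, b, temperature=0.1):
    """a, b: (N, D) paired features. Matching rows are positives;
    all other rows in the batch act as negatives."""
    a, b = F.normalize(a, dim=1), F.normalize(b, dim=1)
    logits = a @ b.t() / temperature   # (N, N) similarity matrix
    targets = torch.arange(a.size(0))  # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# E.g., local and global discriminator features of the same image:
local_feats, global_feats = torch.randn(32, 128), torch.randn(32, 128)
loss = info_nce(local_feats, global_feats)
```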
This list is automatically generated from the titles and abstracts of the papers on this site.