Stylized Projected GAN: A Novel Architecture for Fast and Realistic
Image Generation
- URL: http://arxiv.org/abs/2307.16275v1
- Date: Sun, 30 Jul 2023 17:05:22 GMT
- Title: Stylized Projected GAN: A Novel Architecture for Fast and Realistic
Image Generation
- Authors: Md Nurul Muttakin, Malik Shahid Sultan, Robert Hoehndorf, Hernando
Ombao
- Abstract summary: Projected GANs tackle the training difficulty of GANs by using transfer learning to project the generated and real samples into a pre-trained feature space.
Modules from StyleGAN and FastGAN are integrated within the generator architecture of FastGAN to mitigate the problem of artifacts in the generated images.
- Score: 8.796424252434875
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Generative Adversarial Networks generate data using a generator and
a discriminator. GANs usually produce high-quality images, but training them in
an adversarial setting is difficult: GANs require high computation power and
careful hyper-parameter tuning to converge. Projected GANs tackle this training
difficulty by using transfer learning to project the generated and real samples
into a pre-trained feature space. Projected GANs improve training time and
convergence, but they produce artifacts in the generated images that reduce
sample quality. We propose an optimized architecture called Stylized Projected
GAN, which integrates the mapping network of StyleGAN with the Skip Layer
Excitation of FastGAN. The integrated modules are incorporated within the
generator architecture of FastGAN to mitigate the problem of artifacts in the
generated images.
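The two borrowed components can be sketched concretely. Below is a minimal NumPy illustration of a StyleGAN-style mapping network and a FastGAN-style Skip Layer Excitation gate; all layer sizes and weight shapes here are made up for the example, and this is a reconstruction of the general idea rather than the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mapping_network(z, weights):
    """StyleGAN-style mapping network: an MLP mapping noise z to an
    intermediate latent w (a toy 2-layer version; StyleGAN uses 8 layers)."""
    h = z
    for W in weights:
        h = np.maximum(W @ h, 0.0)  # fully connected + ReLU (stand-in activation)
    return h

def skip_layer_excitation(x_low, x_high, W1, W2):
    """FastGAN-style Skip Layer Excitation: squeeze a low-resolution feature
    map into per-channel gates and rescale a high-resolution feature map.
    x_low:  (C_low, H, W) low-res features
    x_high: (C_high, H', W') high-res features"""
    squeezed = x_low.mean(axis=(1, 2))  # global average pool -> (C_low,)
    gates = 1.0 / (1.0 + np.exp(-(W2 @ np.maximum(W1 @ squeezed, 0.0))))  # sigmoid
    return x_high * gates[:, None, None]  # channel-wise rescale

# Toy shapes: map a 64-d z to w; gate a 32-channel high-res map with an
# 8-channel low-res map.
z = rng.standard_normal(64)
w = mapping_network(z, [rng.standard_normal((64, 64)) * 0.1 for _ in range(2)])
x_low = rng.standard_normal((8, 8, 8))
x_high = rng.standard_normal((32, 64, 64))
out = skip_layer_excitation(x_low, x_high,
                            rng.standard_normal((16, 8)),
                            rng.standard_normal((32, 16)))
print(w.shape, out.shape)  # (64,) (32, 64, 64)
```

Because the gates pass through a sigmoid, the excitation can only attenuate channels of the high-resolution map, which is the mechanism FastGAN uses to let low-resolution context modulate fine detail.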
Related papers
- Faster Projected GAN: Towards Faster Few-Shot Image Generation [10.068622488926172]
This paper proposes an improved GAN model named Faster Projected GAN, built on Projected GAN.
By introducing depthwise separable convolutions (DSC), the parameter count of Projected GAN is reduced, training is accelerated, and memory is saved.
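The parameter saving from depthwise separable convolution is simple arithmetic: a standard k×k convolution costs C_in·C_out·k², while the depthwise + pointwise pair costs C_in·k² + C_in·C_out. The sketch below compares the two; the channel and kernel sizes are illustrative, not the ones used in Faster Projected GAN.

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def dsc_params(c_in, c_out, k):
    """Parameters of a depthwise separable convolution (bias omitted)."""
    depthwise = c_in * k * k   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1 x 1 conv mixing channels
    return depthwise + pointwise

c_in, c_out, k = 256, 256, 3
print(conv_params(c_in, c_out, k))  # 589824
print(dsc_params(c_in, c_out, k))   # 67840
print(conv_params(c_in, c_out, k) / dsc_params(c_in, c_out, k))  # ~8.7x fewer
```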
arXiv Detail & Related papers (2024-01-23T07:55:27Z)
- In-Domain GAN Inversion for Faithful Reconstruction and Editability [132.68255553099834]
We propose in-domain GAN inversion, which consists of a domain-guided encoder and a domain-regularized optimization to keep the inverted code in the native latent space of the pre-trained GAN model.
We make comprehensive analyses on the effects of the encoder structure, the starting inversion point, as well as the inversion parameter space, and observe the trade-off between the reconstruction quality and the editing property.
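The regularized-inversion idea can be illustrated with a toy linear "generator": recover a latent code by minimizing reconstruction error plus a term keeping the code close to the prior. This is a hand-rolled sketch of the general recipe, not the paper's domain-guided encoder or its actual objective.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy generator: a fixed linear map from a 16-d latent w to a 64-d "image" x.
A = rng.standard_normal((64, 16))
w_true = rng.standard_normal(16)
x = A @ w_true

# Inversion by gradient descent: minimize ||A w - x||^2 + lam * ||w||^2,
# where the small-norm penalty stands in for an "in-domain" regularizer.
w = np.zeros(16)
lam, lr = 1e-3, 1e-3
for _ in range(2000):
    grad = 2 * A.T @ (A @ w - x) + 2 * lam * w
    w -= lr * grad

recon_err = np.linalg.norm(A @ w - x)
print(recon_err)  # close to zero: faithful reconstruction under regularization
```

The trade-off the paper analyzes shows up here too: increasing `lam` pulls `w` toward the prior (better "editability" in the real setting) at the cost of reconstruction error.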
arXiv Detail & Related papers (2023-09-25T08:42:06Z)
- LD-GAN: Low-Dimensional Generative Adversarial Network for Spectral Image Generation with Variance Regularization [72.4394510913927]
Deep learning methods are state-of-the-art for spectral image (SI) computational tasks.
GANs enable diverse augmentation by learning and sampling from the data distribution.
GAN-based SI generation is challenging since the high-dimensional nature of this kind of data hinders the convergence of GAN training, yielding suboptimal generation.
We propose a statistical regularization to control the low-dimensional representation variance for the autoencoder training and to achieve high diversity of samples generated with the GAN.
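A variance-regularization term of this flavor can be sketched as follows; the loss form, weight, and target variance are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(2)

def variance_regularized_loss(x, x_hat, z, lam=0.1, target_var=1.0):
    """Autoencoder loss with a variance penalty on the latent batch z:
    reconstruction MSE plus a term pushing each latent dimension's batch
    variance toward target_var, so that later GAN sampling draws from a
    well-spread low-dimensional space. Illustrative only."""
    recon = np.mean((x - x_hat) ** 2)
    var_pen = np.mean((z.var(axis=0) - target_var) ** 2)
    return recon + lam * var_pen

x = rng.standard_normal((32, 100))        # batch of 32 "spectral" vectors
x_hat = x + 0.01 * rng.standard_normal((32, 100))
z_collapsed = np.full((32, 8), 0.5)       # degenerate latent: zero variance
z_spread = rng.standard_normal((32, 8))   # well-spread latent

print(variance_regularized_loss(x, x_hat, z_collapsed))
print(variance_regularized_loss(x, x_hat, z_spread))  # lower: variance near target
```

The penalty discourages the autoencoder from collapsing latent dimensions, which is one way to keep the low-dimensional representation diverse enough for the GAN stage.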
arXiv Detail & Related papers (2023-04-29T00:25:02Z)
- TcGAN: Semantic-Aware and Structure-Preserved GANs with Individual Vision Transformer for Fast Arbitrary One-Shot Image Generation [11.207512995742999]
One-shot image generation (OSG) with generative adversarial networks that learn from the internal patches of a given image has attracted worldwide attention.
We propose TcGAN, a novel structure-preserving method with an individual vision transformer, to overcome the shortcomings of existing one-shot image generation methods.
arXiv Detail & Related papers (2023-02-16T03:05:59Z)
- GIU-GANs: Global Information Utilization for Generative Adversarial Networks [3.3945834638760948]
In this paper, we propose a new GAN called Involution Generative Adversarial Networks (GIU-GANs).
GIU-GANs leverages a brand new module called the Global Information Utilization (GIU) module, which integrates Squeeze-and-Excitation Networks (SENet) and involution.
Batch Normalization (BN) inevitably ignores the representation differences among noise sampled by the generator, and thus degrades the generated image quality.
arXiv Detail & Related papers (2022-01-25T17:17:15Z)
- Investigating the Potential of Auxiliary-Classifier GANs for Image Classification in Low Data Regimes [12.128005423388226]
We examine the potential for Auxiliary-Classifier GANs (AC-GANs) as a 'one-stop-shop' architecture for image classification.
AC-GANs show promise in image classification, achieving competitive performance with standard CNNs.
arXiv Detail & Related papers (2022-01-22T19:33:16Z)
- InvGAN: Invertible GANs [88.58338626299837]
InvGAN, short for Invertible GAN, successfully embeds real images to the latent space of a high quality generative model.
This allows us to perform image inpainting, merging, and online data augmentation.
arXiv Detail & Related papers (2021-12-08T21:39:00Z)
- Dynamically Grown Generative Adversarial Networks [111.43128389995341]
We propose a method to dynamically grow a GAN during training, optimizing the network architecture and its parameters together with automation.
The method embeds architecture search techniques as an interleaving step with gradient-based training to periodically seek the optimal architecture-growing strategy for the generator and discriminator.
arXiv Detail & Related papers (2021-06-16T01:25:51Z)
- Guiding GANs: How to control non-conditional pre-trained GANs for conditional image generation [69.10717733870575]
We present a novel method for guiding generic non-conditional GANs to behave as conditional GANs.
Our approach adds an encoder network into the mix to generate the high-dimensional random inputs that are fed to the generator network of a non-conditional GAN.
arXiv Detail & Related papers (2021-01-04T14:03:32Z)
- Efficient texture-aware multi-GAN for image inpainting [5.33024001730262]
Recent inpainting methods based on GANs (Generative Adversarial Networks) show remarkable improvements.
We propose a multi-GAN architecture improving both the performance and rendering efficiency.
arXiv Detail & Related papers (2020-09-30T14:58:03Z)
- Generative Hierarchical Features from Synthesizing Images [65.66756821069124]
We show that learning to synthesize images can bring remarkable hierarchical visual features that are generalizable across a wide range of applications.
The visual feature produced by our encoder, termed Generative Hierarchical Feature (GH-Feat), has strong transferability to both generative and discriminative tasks.
arXiv Detail & Related papers (2020-07-20T18:04:14Z)