Time Efficient Training of Progressive Generative Adversarial Network
using Depthwise Separable Convolution and Super Resolution Generative
Adversarial Network
- URL: http://arxiv.org/abs/2202.12337v1
- Date: Thu, 24 Feb 2022 19:53:37 GMT
- Authors: Atharva Karwande, Pranesh Kulkarni, Tejas Kolhe, Akshay Joshi, Soham
Kamble
- Abstract summary: We propose a novel pipeline that combines Progressive GAN with slight modifications and Super Resolution GAN.
Super Resolution GAN upsamples low-resolution images to high-resolution images, which can dramatically reduce the training time.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Generative Adversarial Networks have been employed successfully to generate
high-resolution augmented images of size 1024^2. Although the augmented images
generated are unprecedented, the training time of the model is exceptionally
high. Conventional GAN requires training of both Discriminator as well as the
Generator. In Progressive GAN, which is the current state-of-the-art GAN for
image augmentation, instead of training the GAN all at once, a new concept of
progressively growing the Discriminator and Generator simultaneously was
proposed. Although the lower stages such as 4x4 and 8x8 train rather quickly,
the later stages consume a tremendous amount of time which could take days to
finish the model training. In our paper, we propose a novel pipeline that
combines Progressive GAN with slight modifications and Super Resolution GAN.
Super Resolution GAN upsamples low-resolution images to high-resolution images,
which can prove to be a useful resource to dramatically reduce the training
time.
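Depthwise separable convolution, named in the title, is the main modification the pipeline makes to Progressive GAN's layers. A minimal sketch of why it trains faster (the channel counts below are hypothetical examples, not the paper's configuration):

```python
# Parameter-count comparison between a standard convolution and a
# depthwise separable convolution for the same input/output channels.

def standard_conv_params(c_in, c_out, k):
    # A standard conv learns one k x k filter per (input, output) channel pair.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise step: one k x k spatial filter per input channel.
    depthwise = c_in * k * k
    # Pointwise step: a 1 x 1 convolution that mixes channels.
    pointwise = c_in * c_out
    return depthwise + pointwise

c_in, c_out, k = 128, 128, 3
std = standard_conv_params(c_in, c_out, k)        # 147456
sep = depthwise_separable_params(c_in, c_out, k)  # 1152 + 16384 = 17536
print(std, sep, round(std / sep, 1))              # roughly an 8x reduction
```

Fewer parameters per layer means fewer multiply-accumulates per step, which is where the training-time savings in the later high-resolution stages would come from.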
Related papers
- A Wavelet Diffusion GAN for Image Super-Resolution [7.986370916847687]
Diffusion models have emerged as a superior alternative to generative adversarial networks (GANs) for high-fidelity image generation.
However, their real-time feasibility is hindered by slow training and inference speeds.
This study proposes a wavelet-based conditional Diffusion GAN scheme for Single-Image Super-Resolution.
arXiv Detail & Related papers (2024-10-23T15:34:06Z)
- Effective Diffusion Transformer Architecture for Image Super-Resolution [63.254644431016345]
We design an effective diffusion transformer for image super-resolution (DiT-SR)
In practice, DiT-SR leverages an overall U-shaped architecture, and adopts a uniform isotropic design for all the transformer blocks.
We analyze the limitation of the widely used AdaLN, and present a frequency-adaptive time-step conditioning module.
arXiv Detail & Related papers (2024-09-29T07:14:16Z)
- E$^{2}$GAN: Efficient Training of Efficient GANs for Image-to-Image Translation [69.72194342962615]
We introduce and address a novel research direction: can the process of distilling GANs from diffusion models be made significantly more efficient?
First, we construct a base GAN model with generalized features, adaptable to different concepts through fine-tuning, eliminating the need for training from scratch.
Second, we identify crucial layers within the base GAN model and employ Low-Rank Adaptation (LoRA) with a simple yet effective rank search process, rather than fine-tuning the entire base model.
Third, we investigate the minimal amount of data necessary for fine-tuning, further reducing the overall training time.
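The LoRA step in the summary above can be sketched with plain matrices: instead of fine-tuning a full weight matrix W, only a low-rank update A @ B is trained. The matrix sizes and rank here are hypothetical, not E$^{2}$GAN's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 512, 512, 4

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((d_out, r)) * 0.01  # trainable low-rank factor
B = np.zeros((r, d_in))                     # B starts at zero: W unchanged

def adapted_forward(x):
    # Effective weight is W + A @ B; only A and B receive gradients.
    return (W + A @ B) @ x

full_params = W.size             # 262144
lora_params = A.size + B.size    # 4096
print(lora_params / full_params) # trainable fraction, here under 2%
```

Because B is initialized to zero, the adapted layer starts out identical to the pretrained one, and fine-tuning only touches the small factors A and B.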
arXiv Detail & Related papers (2024-01-11T18:59:14Z)
- ScaleCrafter: Tuning-free Higher-Resolution Visual Generation with Diffusion Models [126.35334860896373]
We investigate the capability of generating images from pre-trained diffusion models at much higher resolutions than the training image sizes.
Existing works for higher-resolution generation, such as attention-based and joint-diffusion approaches, cannot well address these issues.
We propose a simple yet effective re-dilation that can dynamically adjust the convolutional perception field during inference.
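The intuition behind re-dilation: dilating a kernel enlarges its receptive field at inference time without adding parameters. The formula below is the standard dilated-convolution relation; the specific dilation schedule ScaleCrafter uses is not reproduced here:

```python
# Effective spatial extent of a k x k kernel with dilation d:
# the kernel covers k + (k - 1) * (d - 1) positions along each axis.

def effective_kernel_size(k, dilation):
    return k + (k - 1) * (dilation - 1)

# Doubling the dilation roughly doubles the field a 3x3 kernel perceives,
# matching a 2x increase in generation resolution.
print(effective_kernel_size(3, 1))  # 3
print(effective_kernel_size(3, 2))  # 5
```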
arXiv Detail & Related papers (2023-10-11T17:52:39Z)
- A Survey on Leveraging Pre-trained Generative Adversarial Networks for Image Editing and Restoration [72.17890189820665]
Generative adversarial networks (GANs) have drawn enormous attention due to the simple yet effective training mechanism and superior image generation quality.
Recent GAN models have greatly narrowed the gaps between the generated images and the real ones.
Many recent works show emerging interest to take advantage of pre-trained GAN models by exploiting the well-disentangled latent space and the learned GAN priors.
arXiv Detail & Related papers (2022-07-21T05:05:58Z)
- Projected GANs Converge Faster [50.23237734403834]
Generative Adversarial Networks (GANs) produce high-quality images but are challenging to train.
We make significant headway on these issues by projecting generated and real samples into a fixed, pretrained feature space.
Our Projected GAN improves image quality, sample efficiency, and convergence speed.
arXiv Detail & Related papers (2021-11-01T15:11:01Z)
- Generative Adversarial Stacked Autoencoders [3.1829446824051195]
We propose a Generative Adversarial Stacked Convolutional Autoencoder (GASCA) model and a generative adversarial gradual greedy layer-wise learning algorithm designed to train Adversarial Autoencoders.
Our training approach produces images with significantly lower reconstruction error than vanilla joint training.
arXiv Detail & Related papers (2020-11-22T17:51:59Z)
- TinyGAN: Distilling BigGAN for Conditional Image Generation [2.8072597424460466]
BigGAN has significantly improved the quality of image generation on ImageNet, but it requires a huge model, making it hard to deploy on resource-constrained devices.
We propose a black-box knowledge distillation framework for compressing GANs, which highlights a stable and efficient training process.
Given BigGAN as the teacher network, we manage to train a much smaller student network to mimic its functionality, achieving competitive performance on Inception and FID scores with the generator having $16\times$ fewer parameters.
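A toy sketch of black-box distillation in the spirit of TinyGAN: the student only ever sees (input, teacher output) pairs, never the teacher's weights. The "teacher" here is a fixed linear map and all shapes are hypothetical, far simpler than an actual GAN generator:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((8, 8))       # black-box teacher (queried only)
teacher = lambda z: T @ z

S = np.zeros((8, 8))                  # student weights (far fewer in practice)
lr = 0.02
for _ in range(2000):
    z = rng.standard_normal((8, 32))  # batch of latent codes
    err = S @ z - teacher(z)          # imitation error on teacher outputs
    S -= lr * (err @ z.T) / z.shape[1]  # gradient step on mean-squared error

print(np.abs(S - T).max())  # student closely matches the teacher map
```

The point of the sketch is the training signal: only queries to the teacher are needed, which is what makes the distillation "black-box".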
arXiv Detail & Related papers (2020-09-29T07:33:49Z)
- Improving the Speed and Quality of GAN by Adversarial Training [87.70013107142142]
We develop FastGAN to improve the speed and quality of GAN training based on the adversarial training technique.
Our training algorithm brings ImageNet training to the broader public by requiring 2-4 GPUs.
arXiv Detail & Related papers (2020-08-07T20:21:31Z)
- Autoencoding Generative Adversarial Networks [0.0]
I propose a four-network model which learns a mapping between a specified latent space and a given sample space.
The AEGAN technique offers several improvements to typical GAN training, including training stabilization, mode-collapse prevention, and permitting direct interpolation between real samples.
arXiv Detail & Related papers (2020-04-11T19:51:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.