TinyGAN: Distilling BigGAN for Conditional Image Generation
- URL: http://arxiv.org/abs/2009.13829v1
- Date: Tue, 29 Sep 2020 07:33:49 GMT
- Title: TinyGAN: Distilling BigGAN for Conditional Image Generation
- Authors: Ting-Yun Chang and Chi-Jen Lu
- Abstract summary: BigGAN has significantly improved the quality of image generation on ImageNet, but it requires a huge model, making it hard to deploy on resource-constrained devices.
We propose a black-box knowledge distillation framework for compressing GANs, which highlights a stable and efficient training process.
Given BigGAN as the teacher network, we manage to train a much smaller student network to mimic its functionality, achieving competitive performance on Inception and FID scores with the generator having $16\times$ fewer parameters.
- Score: 2.8072597424460466
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative Adversarial Networks (GANs) have become a powerful approach for
generative image modeling. However, GANs are notorious for their training
instability, especially on large-scale, complex datasets. While the recent work
of BigGAN has significantly improved the quality of image generation on
ImageNet, it requires a huge model, making it hard to deploy on
resource-constrained devices. To reduce the model size, we propose a black-box
knowledge distillation framework for compressing GANs, which highlights a
stable and efficient training process. Given BigGAN as the teacher network, we
manage to train a much smaller student network to mimic its functionality,
achieving competitive performance on Inception and FID scores with the
generator having $16\times$ fewer parameters.
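As a rough illustration of this black-box setting, the sketch below fits a small conditional student generator to pre-collected (noise, class label, teacher image) triples using a pixel-level distillation loss: the teacher is only ever queried for its outputs. The StudentGenerator architecture, the single L1 term, and all hyperparameters are illustrative placeholders rather than the paper's actual design, which combines several loss terms.

```python
# Minimal sketch of black-box distillation for a conditional GAN student.
# The teacher (e.g., BigGAN) is only queried for outputs: we record
# (noise z, class label y, teacher image) triples and fit a small student
# generator to reproduce them. StudentGenerator is a hypothetical stand-in;
# additional loss terms used in the paper are omitted here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentGenerator(nn.Module):
    """Small conditional generator: maps (z, class embedding) to an image."""
    def __init__(self, z_dim=128, n_classes=1000, img_ch=3):
        super().__init__()
        self.embed = nn.Embedding(n_classes, z_dim)
        self.net = nn.Sequential(
            nn.Linear(2 * z_dim, 4 * 4 * 256), nn.ReLU(),
            nn.Unflatten(1, (256, 4, 4)),
            nn.Upsample(scale_factor=4), nn.Conv2d(256, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=8), nn.Conv2d(64, img_ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, z, y):
        return self.net(torch.cat([z, self.embed(y)], dim=1))

def distill_step(student, optimizer, z, y, teacher_img):
    """One black-box distillation step on a pre-collected (z, y, teacher_img) batch."""
    optimizer.zero_grad()
    fake = student(z, y)
    loss = F.l1_loss(fake, teacher_img)  # pixel-level distillation loss
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    student = StudentGenerator()
    opt = torch.optim.Adam(student.parameters(), lr=2e-4, betas=(0.0, 0.999))
    z = torch.randn(8, 128)
    y = torch.randint(0, 1000, (8,))
    teacher_img = torch.rand(8, 3, 128, 128) * 2 - 1  # placeholder for recorded teacher outputs
    print(distill_step(student, opt, z, y, teacher_img))
```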
Related papers
- E$^{2}$GAN: Efficient Training of Efficient GANs for Image-to-Image Translation [69.72194342962615]
We introduce and address a novel research direction: can the process of distilling GANs from diffusion models be made significantly more efficient?
First, we construct a base GAN model with generalized features, adaptable to different concepts through fine-tuning, eliminating the need for training from scratch.
Second, we identify crucial layers within the base GAN model and employ Low-Rank Adaptation (LoRA) with a simple yet effective rank search process, rather than fine-tuning the entire base model.
Third, we investigate the minimal amount of data necessary for fine-tuning, further reducing the overall training time.
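A minimal sketch of the LoRA idea described above, assuming a hypothetical LoRALinear wrapper: a frozen layer of the base GAN is augmented with a trainable low-rank update, so only the adapter parameters are fine-tuned for a new concept. The rank, the scaling, and which layer counts as "crucial" are illustrative choices, not the paper's.

```python
# Hedged sketch of low-rank adaptation of a frozen base-GAN layer.
# LoRALinear and the rank value are illustrative, not the paper's exact design.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear with a trainable low-rank update W + B @ A."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # base GAN weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

# Usage: replace a "crucial" layer of the base generator with its LoRA-wrapped
# version and fine-tune only the adapter parameters on the new concept.
base_layer = nn.Linear(256, 256)
adapted = LoRALinear(base_layer, rank=4)
trainable = [p for p in adapted.parameters() if p.requires_grad]
opt = torch.optim.Adam(trainable, lr=1e-4)
out = adapted(torch.randn(2, 256))
print(out.shape, sum(p.numel() for p in trainable), "trainable adapter params")
```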
arXiv Detail & Related papers (2024-01-11T18:59:14Z) - A Survey on Leveraging Pre-trained Generative Adversarial Networks for
Image Editing and Restoration [72.17890189820665]
Generative adversarial networks (GANs) have drawn enormous attention due to their simple yet effective training mechanism and superior image generation quality.
Recent GAN models have greatly narrowed the gaps between the generated images and the real ones.
Many recent works show emerging interest in taking advantage of pre-trained GAN models by exploiting their well-disentangled latent space and learned priors.
arXiv Detail & Related papers (2022-07-21T05:05:58Z) - Time Efficient Training of Progressive Generative Adversarial Network
using Depthwise Separable Convolution and Super Resolution Generative
Adversarial Network [0.0]
We propose a novel pipeline that combines a slightly modified Progressive GAN with a Super Resolution GAN.
The Super Resolution GAN upsamples low-resolution images to high resolution, which can substantially reduce training time.
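A toy sketch of this two-stage pipeline: a stand-in low-resolution generator produces 32x32 samples, which a stand-in super-resolution module upscales to 128x128, so the GAN itself never has to be trained at the target resolution. LowResGenerator and SuperResolver are placeholders, not the paper's models.

```python
# Illustrative two-stage pipeline: low-resolution generation followed by
# super-resolution upscaling. Both modules are simple placeholders.
import torch
import torch.nn as nn

class LowResGenerator(nn.Module):
    """Stand-in for a progressively grown generator producing 32x32 images."""
    def __init__(self, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 8 * 8 * 64), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.Upsample(scale_factor=4), nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z)

class SuperResolver(nn.Module):
    """Stand-in for an SRGAN-style 4x upscaler (32x32 -> 128x128)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

z = torch.randn(4, 64)
low_res = LowResGenerator()(z)        # (4, 3, 32, 32)
high_res = SuperResolver()(low_res)   # (4, 3, 128, 128)
print(low_res.shape, high_res.shape)
```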
arXiv Detail & Related papers (2022-02-24T19:53:37Z) - DGL-GAN: Discriminator Guided Learning for GAN Compression [57.6150859067392]
Generative Adversarial Networks (GANs) with high computation costs have achieved remarkable results in synthesizing high-resolution images from random noise.
We propose a novel yet simple Discriminator Guided Learning approach for compressing vanilla GANs, dubbed DGL-GAN.
arXiv Detail & Related papers (2021-12-13T09:24:45Z) - InvGAN: Invertible GANs [88.58338626299837]
InvGAN, short for Invertible GAN, successfully embeds real images into the latent space of a high-quality generative model.
This allows us to perform image inpainting, merging, and online data augmentation.
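For intuition, the sketch below shows the generic latent-optimization baseline for embedding a real image into a frozen generator's latent space; InvGAN's own inversion mechanism may differ, and the toy generator here exists only to make the example runnable.

```python
# Generic GAN-inversion baseline: optimize a latent code z so the frozen
# generator reconstructs a given real image. Not the paper's exact method.
import torch
import torch.nn.functional as F

def invert_image(generator, target, z_dim=128, steps=500, lr=0.05):
    """Optimize a latent code z so that generator(z) reconstructs `target`."""
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon = generator(z)
        loss = F.mse_loss(recon, target)   # pixel loss; perceptual losses are also common
        loss.backward()
        opt.step()
    return z.detach()

if __name__ == "__main__":
    # Toy stand-in generator, just to make the example runnable.
    toy_gen = torch.nn.Sequential(torch.nn.Linear(128, 3 * 32 * 32), torch.nn.Tanh(),
                                  torch.nn.Unflatten(1, (3, 32, 32)))
    target = torch.rand(1, 3, 32, 32) * 2 - 1
    z_hat = invert_image(toy_gen, target, steps=100)
    print(z_hat.shape)
```

Once a real image is embedded this way, edits such as inpainting or merging can be performed by manipulating the recovered latent code and re-generating.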
arXiv Detail & Related papers (2021-12-08T21:39:00Z) - Online Multi-Granularity Distillation for GAN Compression [17.114017187236836]
Generative Adversarial Networks (GANs) have achieved widespread success in generating high-quality images.
However, GANs are hard to deploy on resource-constrained devices due to their heavy computational cost and memory usage.
We propose a novel online multi-granularity distillation scheme to obtain lightweight GANs.
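A hedged sketch of distillation at more than one granularity: the lightweight student is matched to the teacher at both the output (image) level and an intermediate feature level, with a small 1x1 adapter aligning channel widths. The networks, loss weights, and adapter are illustrative; the paper's full scheme, including its online teacher training, is not reproduced here.

```python
# Illustrative multi-granularity distillation loss: output-level plus
# feature-level terms. Weights and networks are placeholder assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def multi_granularity_loss(student_img, teacher_img, student_feat, teacher_feat,
                           adapter: nn.Module, w_out=1.0, w_feat=0.5):
    """Combine output-level and feature-level distillation terms.

    `adapter` is a small learned projection that maps the student's feature
    map to the teacher's channel width before comparison.
    """
    out_loss = F.l1_loss(student_img, teacher_img)
    feat_loss = F.mse_loss(adapter(student_feat), teacher_feat)
    return w_out * out_loss + w_feat * feat_loss

if __name__ == "__main__":
    student_img = torch.rand(2, 3, 64, 64)
    teacher_img = torch.rand(2, 3, 64, 64)
    student_feat = torch.rand(2, 32, 16, 16)
    teacher_feat = torch.rand(2, 128, 16, 16)
    adapter = nn.Conv2d(32, 128, kernel_size=1)  # channel-matching 1x1 conv
    loss = multi_granularity_loss(student_img, teacher_img,
                                  student_feat, teacher_feat, adapter)
    print(loss.item())
```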
arXiv Detail & Related papers (2021-08-16T05:49:50Z) - SETGAN: Scale and Energy Trade-off GANs for Image Applications on Mobile
Platforms [15.992829133103921]
We propose Scale-Energy Tradeoff GAN (SETGAN), a novel approach that trades off a GAN's image generation accuracy against the energy consumed at run-time.
We use SinGAN, a single-image unconditional generative model that contains a pyramid of fully convolutional GANs.
With SETGAN's unique client-server-based architecture, we were able to achieve a 56% gain in energy for a loss of 3% to 12% SSIM accuracy.
arXiv Detail & Related papers (2021-03-23T23:51:22Z) - Towards Faster and Stabilized GAN Training for High-fidelity Few-shot
Image Synthesis [21.40315235087551]
We propose a lightweight GAN structure that achieves superior quality at 1024×1024 resolution.
We show our model's superior performance compared to the state-of-the-art StyleGAN2 when data and computing budgets are limited.
arXiv Detail & Related papers (2021-01-12T22:02:54Z) - Hijack-GAN: Unintended-Use of Pretrained, Black-Box GANs [57.90008929377144]
We show that state-of-the-art GAN models can be used for a range of applications beyond unconditional image generation.
We achieve this through an iterative scheme that also provides control over the image generation process.
arXiv Detail & Related papers (2020-11-28T11:07:36Z) - Enhanced Balancing GAN: Minority-class Image Generation [0.7310043452300734]
Generative adversarial networks (GANs) are among the most powerful generative models, but they struggle to generate minority-class images from imbalanced datasets.
Balancing GAN (BAGAN) was proposed to mitigate this problem, but it is unstable when images in different classes look similar.
In this work, we propose a supervised autoencoder with an intermediate embedding model to disperse the labeled latent vectors.
Our proposed model overcomes the instability issue of the original BAGAN and converges faster to high-quality generation.
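A speculative sketch of a supervised autoencoder with an intermediate class embedding, the mechanism the summary points to for dispersing labeled latent vectors: each latent code is pulled toward a learned per-class center while reconstructing its input. The layer sizes, the 0.1 weight, and the specific embedding loss are assumptions, not the paper's formulation.

```python
# Illustrative supervised autoencoder: reconstruction loss plus a term that
# pulls each latent code toward a learned center for its class label.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SupervisedAutoencoder(nn.Module):
    def __init__(self, img_dim=3 * 32 * 32, latent_dim=64, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(img_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, img_dim), nn.Tanh())
        # learned per-class centers that latent codes are pulled toward
        self.class_embed = nn.Embedding(n_classes, latent_dim)

    def forward(self, x, y):
        z = self.encoder(x)
        recon = self.decoder(z)
        recon_loss = F.mse_loss(recon, x.flatten(1))
        embed_loss = F.mse_loss(z, self.class_embed(y))  # keep classes separated in latent space
        return recon_loss + 0.1 * embed_loss

if __name__ == "__main__":
    model = SupervisedAutoencoder()
    x = torch.rand(8, 3, 32, 32) * 2 - 1
    y = torch.randint(0, 10, (8,))
    loss = model(x, y)
    loss.backward()
    print(loss.item())
```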
arXiv Detail & Related papers (2020-10-31T05:03:47Z) - Distilling portable Generative Adversarial Networks for Image
Translation [101.33731583985902]
Traditional network compression methods focus on visual recognition tasks and do not deal with generative tasks.
Inspired by knowledge distillation, a student generator with fewer parameters is trained by inheriting low-level and high-level information from the original heavy teacher generator.
An adversarial learning process is established to optimize the student generator and the student discriminator.
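A hedged sketch of the combined objective described above: the student generator matches the teacher's output (low-level information) and deeper features (high-level information), while a student discriminator supplies an adversarial signal. The feature extractor, discriminator, and loss weights are placeholders, not the paper's exact networks or losses.

```python
# Illustrative student-generator objective: pixel-level imitation of the
# teacher, feature-level imitation, and an adversarial term from a student
# discriminator. All modules and weights below are placeholder assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def student_generator_loss(student_out, teacher_out, feat_net, student_disc,
                           w_pix=10.0, w_feat=1.0, w_adv=1.0):
    pix_loss = F.l1_loss(student_out, teacher_out)                       # low-level information
    feat_loss = F.l1_loss(feat_net(student_out), feat_net(teacher_out))  # high-level information
    logits = student_disc(student_out)
    adv_loss = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return w_pix * pix_loss + w_feat * feat_loss + w_adv * adv_loss

if __name__ == "__main__":
    student_out = torch.rand(2, 3, 64, 64, requires_grad=True)
    teacher_out = torch.rand(2, 3, 64, 64)
    feat_net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                             nn.AdaptiveAvgPool2d(8))                    # placeholder feature extractor
    student_disc = nn.Sequential(nn.Conv2d(3, 8, 4, stride=2), nn.ReLU(),
                                 nn.Flatten(), nn.LazyLinear(1))
    loss = student_generator_loss(student_out, teacher_out, feat_net, student_disc)
    loss.backward()
    print(loss.item())
```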
arXiv Detail & Related papers (2020-03-07T05:53:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.