Freeze the Discriminator: a Simple Baseline for Fine-Tuning GANs
- URL: http://arxiv.org/abs/2002.10964v2
- Date: Fri, 28 Feb 2020 10:53:50 GMT
- Title: Freeze the Discriminator: a Simple Baseline for Fine-Tuning GANs
- Authors: Sangwoo Mo, Minsu Cho, Jinwoo Shin
- Abstract summary: We show that simple fine-tuning of GANs with frozen lower layers of the discriminator performs surprisingly well.
This simple baseline, FreezeD, significantly outperforms previous techniques used in both unconditional and conditional GANs.
- Score: 104.85633684716296
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) have shown outstanding performance on
a wide range of problems in computer vision, graphics, and machine learning,
but often require a large amount of training data and heavy computational resources. To
tackle this issue, several methods introduce transfer learning techniques into
GAN training. These methods, however, are either prone to overfitting or limited to
learning small distribution shifts. In this paper, we show that simple
fine-tuning of GANs with frozen lower layers of the discriminator performs
surprisingly well. This simple baseline, FreezeD, significantly outperforms
previous techniques used in both unconditional and conditional GANs. We
demonstrate the consistent effect using StyleGAN and SNGAN-projection
architectures on several datasets: Animal Face, Anime Face, Oxford Flower,
CUB-200-2011, and Caltech-256. The code and results are available at
https://github.com/sangwoomo/FreezeD.
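The recipe in the abstract reduces to a few lines of code: load the source-domain generator and discriminator, freeze the lower layers of the discriminator, and fine-tune the rest on the target data. Below is a minimal PyTorch sketch of that idea; the helper name, the number of frozen blocks, and the optimizer settings are illustrative assumptions, not the exact implementation in the linked repository.

```python
import torch
import torch.nn as nn

def freeze_lower_discriminator_layers(discriminator: nn.Module, num_frozen: int):
    """Freeze the first `num_frozen` top-level child modules of a pre-trained
    discriminator so only its upper layers are updated during fine-tuning."""
    children = list(discriminator.children())
    for block in children[:num_frozen]:
        for param in block.parameters():
            param.requires_grad = False   # lower feature-extractor layers stay fixed
    # return the parameters that will still be optimized
    return [p for p in discriminator.parameters() if p.requires_grad]

# Hypothetical usage with pre-trained source-domain networks G and D:
# trainable_d = freeze_lower_discriminator_layers(D, num_frozen=4)
# opt_d = torch.optim.Adam(trainable_d, lr=2e-4, betas=(0.0, 0.99))
# opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.0, 0.99))
# ...then run the usual adversarial fine-tuning loop on the target dataset.
```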
Related papers
- Taming the Tail in Class-Conditional GANs: Knowledge Sharing via Unconditional Training at Lower Resolutions [10.946446480162148]
GANs tend to favor classes with more samples, leading to the generation of low-quality and less diverse samples in tail classes.
We propose a straightforward yet effective method for knowledge sharing, allowing tail classes to borrow rich information from classes with more abundant training data.
Experiments on several long-tail benchmarks and GAN architectures demonstrate a significant improvement over existing methods in both the diversity and fidelity of the generated images.
arXiv Detail & Related papers (2024-02-26T23:03:00Z)
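The entry above names its mechanism in the title: share knowledge across classes by training unconditionally at lower resolutions. A hedged sketch of one way to realize that, assuming a discriminator that exposes a class-conditional head on the full-resolution image and an unconditional head on a downsampled one:

```python
import torch.nn.functional as F

def generator_loss(cond_logits, uncond_lowres_logits, lam=1.0):
    """Non-saturating GAN loss combining a class-conditional term on the
    full-resolution fakes with an unconditional term on low-resolution fakes,
    so tail classes benefit from statistics shared across all classes.
    (Hypothetical composition; not the authors' exact objective.)"""
    cond_term = F.softplus(-cond_logits).mean()
    uncond_term = F.softplus(-uncond_lowres_logits).mean()
    return cond_term + lam * uncond_term

# Hypothetical usage:
# fake = G(z, y)
# loss_g = generator_loss(D.cond_head(fake, y),
#                         D.uncond_head(F.avg_pool2d(fake, kernel_size=4)))
```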
- GIFD: A Generative Gradient Inversion Method with Feature Domain Optimization [52.55628139825667]
Federated Learning (FL) has emerged as a promising distributed machine learning framework to preserve clients' privacy.
Recent studies find that an attacker can invert the shared gradients and recover sensitive data against an FL system by leveraging pre-trained generative adversarial networks (GAN) as prior knowledge.
We propose Gradient Inversion over Feature Domains (GIFD), which disassembles the GAN model and searches the feature domains of the intermediate layers.
arXiv Detail & Related papers (2023-08-09T04:34:21Z)
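The GIFD summary gives the attack setting (inverting shared FL gradients with a pre-trained GAN as prior) more than the algorithm itself. The sketch below shows only the generic gradient-matching objective such inversion attacks optimize; GIFD's specific contribution of searching over intermediate feature domains of the disassembled GAN is not reproduced, and the cosine-distance form is an assumption.

```python
import torch

def gradient_matching_loss(model, criterion, x_rec, y_rec, shared_grads):
    """Cosine distance between the gradients induced by the reconstructed batch
    (x_rec, y_rec) and the gradients shared by the FL client. The attacker
    minimizes this with respect to the generator inputs that produce x_rec."""
    preds = model(x_rec)
    grads = torch.autograd.grad(criterion(preds, y_rec),
                                model.parameters(), create_graph=True)
    dot = sum((g * s).sum() for g, s in zip(grads, shared_grads))
    norm = (sum(g.pow(2).sum() for g in grads).sqrt()
            * sum(s.pow(2).sum() for s in shared_grads).sqrt())
    return 1.0 - dot / norm
```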
- LD-GAN: Low-Dimensional Generative Adversarial Network for Spectral Image Generation with Variance Regularization [72.4394510913927]
Deep learning methods are state-of-the-art for spectral image (SI) computational tasks.
GANs enable diverse augmentation by learning and sampling from the data distribution.
GAN-based SI generation is challenging because the high-dimensional nature of this kind of data hinders the convergence of GAN training, yielding suboptimal generation.
We propose a statistical regularization to control the low-dimensional representation variance for the autoencoder training and to achieve high diversity of samples generated with the GAN.
arXiv Detail & Related papers (2023-04-29T00:25:02Z)
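The LD-GAN summary names a statistical regularizer on the variance of the autoencoder's low-dimensional representation but not its exact form, so the penalty below (keeping the per-dimension latent variance near a target value alongside the reconstruction loss) is an assumption about one plausible instantiation.

```python
import torch.nn.functional as F

def autoencoder_loss(x, x_hat, z, target_var=1.0, lam=0.1):
    """Reconstruction term plus a penalty that keeps the variance of the
    low-dimensional codes z close to target_var, so the latent distribution
    the GAN later learns to sample from has a controlled spread."""
    recon = F.mse_loss(x_hat, x)
    var_penalty = (z.var(dim=0, unbiased=False) - target_var).pow(2).mean()
    return recon + lam * var_penalty
```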
- How to train your draGAN: A task oriented solution to imbalanced classification [15.893327571516016]
This paper proposes a unique, performance-oriented, data-generating strategy that utilizes a new architecture, coined draGAN.
The samples are generated with the objective of optimizing the classification model's performance rather than their similarity to the real data.
Empirically, we show the superiority of draGAN but also highlight some of its shortcomings.
arXiv Detail & Related papers (2022-11-18T07:37:34Z)
- ScoreMix: A Scalable Augmentation Strategy for Training GANs with Limited Data [93.06336507035486]
Generative Adversarial Networks (GANs) typically suffer from overfitting when limited training data is available.
We present ScoreMix, a novel and scalable data augmentation approach for various image synthesis tasks.
arXiv Detail & Related papers (2022-10-27T02:55:15Z)
- Collapse by Conditioning: Training Class-conditional GANs with Limited Data [109.30895503994687]
We propose a training strategy for conditional GANs (cGANs) that effectively prevents the observed mode-collapse by leveraging unconditional learning.
Our training strategy starts with an unconditional GAN and gradually injects conditional information into the generator and the objective function.
The proposed method for training cGANs with limited data results not only in stable training but also in generating high-quality images.
arXiv Detail & Related papers (2022-01-17T18:59:23Z)
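"Gradually injects conditional information" in the entry above suggests a conditioning strength that ramps up during training. The sketch below scales the class embedding by a coefficient that goes from 0 (purely unconditional) to 1; the additive injection and linear schedule are assumptions, not the authors' exact scheme.

```python
import torch.nn as nn

class GraduallyConditionedGenerator(nn.Module):
    """Wraps an unconditional backbone; class information is blended into the
    latent code with strength alpha, which the training loop ramps from 0 to 1."""
    def __init__(self, backbone: nn.Module, num_classes: int, z_dim: int):
        super().__init__()
        self.backbone = backbone                       # maps a z_dim vector to an image
        self.embed = nn.Embedding(num_classes, z_dim)  # class embedding in latent space

    def forward(self, z, y, alpha):
        return self.backbone(z + alpha * self.embed(y))

def conditioning_strength(step, warmup_steps, ramp_steps):
    """alpha = 0 during the unconditional warm-up, then a linear ramp to 1."""
    return min(max(step - warmup_steps, 0) / ramp_steps, 1.0)
```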
- Ultra-Data-Efficient GAN Training: Drawing A Lottery Ticket First, Then Training It Toughly [114.81028176850404]
Training generative adversarial networks (GANs) with limited data generally results in deteriorated performance and collapsed models.
We decompose the data-hungry GAN training into two sequential sub-problems.
Such a coordinated framework enables us to focus on lower-complexity and more data-efficient sub-problems.
arXiv Detail & Related papers (2021-02-28T05:20:29Z)
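The two sequential sub-problems suggested by the title above are: first find a sparse "lottery ticket" sub-network, then train only that sub-network. The sketch below builds the sparsity mask by global magnitude pruning; the pruning criterion and sparsity level are assumptions.

```python
import torch

def magnitude_prune_masks(model, sparsity=0.8):
    """Build binary masks that keep the largest-magnitude entries of each
    weight matrix / conv kernel, using a single global threshold."""
    weights = torch.cat([p.detach().abs().flatten()
                         for p in model.parameters() if p.dim() > 1])
    threshold = torch.quantile(weights, sparsity)
    return {name: (p.detach().abs() > threshold).float()
            for name, p in model.named_parameters() if p.dim() > 1}

def apply_masks(model, masks):
    """Zero out pruned weights; call after every optimizer step so the
    sub-network stays sparse while it is trained."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])
```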
- InfoMax-GAN: Improved Adversarial Image Generation via Information Maximization and Contrastive Learning [39.316605441868944]
Generative Adversarial Networks (GANs) are fundamental to many generative modelling applications.
We propose a principled framework to simultaneously mitigate two fundamental issues in GANs: catastrophic forgetting of the discriminator and mode collapse of the generator.
Our approach significantly stabilizes GAN training and improves GAN performance for image synthesis across five datasets.
arXiv Detail & Related papers (2020-07-09T06:56:11Z)
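"Information maximization and contrastive learning" on the discriminator is commonly realized with an InfoNCE-style objective between two feature views of the same image; pairing "local" and "global" projections as below is an assumption about the exact formulation used in the paper.

```python
import torch
import torch.nn.functional as F

def infonce_loss(local_feats, global_feats, temperature=0.1):
    """InfoNCE between two (batch, dim) projections of the same images:
    matching rows are positives, all other rows in the batch are negatives."""
    local_feats = F.normalize(local_feats, dim=1)
    global_feats = F.normalize(global_feats, dim=1)
    logits = local_feats @ global_feats.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)
```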
- GAN Compression: Efficient Architectures for Interactive Conditional GANs [45.012173624111185]
Recent Conditional Generative Adversarial Networks (cGANs) are 1-2 orders of magnitude more compute-intensive than modern recognition CNNs.
We propose a general-purpose compression framework for reducing the inference time and model size of the generator in cGANs.
arXiv Detail & Related papers (2020-03-19T17:59:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.