Sequential training of GANs against GAN-classifiers reveals correlated
"knowledge gaps" present among independently trained GAN instances
- URL: http://arxiv.org/abs/2303.15533v1
- Date: Mon, 27 Mar 2023 18:18:15 GMT
- Title: Sequential training of GANs against GAN-classifiers reveals correlated
"knowledge gaps" present among independently trained GAN instances
- Authors: Arkanath Pathak, Nicholas Dufour
- Abstract summary: We iteratively train GAN-classifiers and train GANs that "fool" the classifiers.
We examine the effect on GAN training dynamics, output quality, and GAN-classifier generalization.
- Score: 1.104121146441257
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern Generative Adversarial Networks (GANs) generate realistic images
remarkably well. Previous work has demonstrated the feasibility of
"GAN-classifiers" that are distinct from the co-trained discriminator, and
operate on images generated from a frozen GAN. That such classifiers work at
all affirms the existence of "knowledge gaps" (out-of-distribution artifacts
across samples) present in GAN training. We iteratively train GAN-classifiers
and train GANs that "fool" the classifiers (in an attempt to fill the knowledge
gaps), and examine the effect on GAN training dynamics, output quality, and
GAN-classifier generalization. We investigate two settings, a small DCGAN
architecture trained on low dimensional images (MNIST), and StyleGAN2, a SOTA
GAN architecture trained on high dimensional images (FFHQ). We find that the
DCGAN is unable to effectively fool a held-out GAN-classifier without
compromising the output quality. However, StyleGAN2 can fool held-out
classifiers with no change in output quality, and this effect persists over
multiple rounds of GAN/classifier training which appears to reveal an ordering
over optima in the generator parameter space. Finally, we study different
classifier architectures and show that the architecture of the GAN-classifier
has a strong influence on the set of its learned artifacts.
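The iterative procedure described in the abstract (train a GAN, freeze it, train a GAN-classifier on real vs. generated images, then train a fresh GAN that must also fool the frozen classifiers) can be sketched as below. This is a minimal illustration on toy data with small MLPs standing in for DCGAN/StyleGAN2; the network sizes, step counts, and the choice of penalizing the generator with every previously frozen classifier are assumptions for illustration, not the paper's configuration.

```python
# Minimal sketch of the sequential GAN / GAN-classifier loop, on toy 2-D data
# with small MLPs standing in for DCGAN/StyleGAN2. Sizes, step counts, and the
# use of *all* previously frozen classifiers in the generator loss are
# illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM, BATCH = 8, 2, 128
bce = nn.BCEWithLogitsLoss()

def mlp(d_in, d_out):
    return nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_out))

def real_batch(n=BATCH):
    # Toy "real" distribution standing in for MNIST / FFHQ images.
    return torch.randn(n, DATA_DIM) * 0.5 + 2.0

def train_gan(frozen_classifiers, steps=500):
    G, D = mlp(LATENT_DIM, DATA_DIM), mlp(DATA_DIM, 1)
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    ones, zeros = torch.ones(BATCH, 1), torch.zeros(BATCH, 1)
    for _ in range(steps):
        x_real, z = real_batch(), torch.randn(BATCH, LATENT_DIM)
        x_fake = G(z)
        # Discriminator step: real -> 1, generated -> 0.
        d_loss = bce(D(x_real), ones) + bce(D(x_fake.detach()), zeros)
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # Generator step: fool the co-trained D and every frozen GAN-classifier,
        # i.e. try to fill the "knowledge gaps" those classifiers exploit.
        g_loss = bce(D(x_fake), ones)
        for clf in frozen_classifiers:
            g_loss = g_loss + bce(clf(x_fake), zeros)  # classifier label 0 = "real"
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return G

def train_gan_classifier(G, steps=300):
    # GAN-classifier: distinct from D, trained on outputs of the *frozen* generator.
    C = mlp(DATA_DIM, 1)
    opt = torch.optim.Adam(C.parameters(), lr=1e-3)
    for _ in range(steps):
        x_real, z = real_batch(), torch.randn(BATCH, LATENT_DIM)
        with torch.no_grad():
            x_fake = G(z)
        loss = bce(C(x_real), torch.zeros(BATCH, 1)) + bce(C(x_fake), torch.ones(BATCH, 1))
        opt.zero_grad(); loss.backward(); opt.step()
    for p in C.parameters():          # freeze before handing to the next round
        p.requires_grad_(False)
    return C.eval()

classifiers = []
for round_idx in range(3):                    # rounds of GAN / classifier training
    G = train_gan(classifiers)                # new GAN tries to fool prior classifiers
    classifiers.append(train_gan_classifier(G))
    print(f"round {round_idx}: {len(classifiers)} frozen classifiers")
```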
Related papers
- U-KAN Makes Strong Backbone for Medical Image Segmentation and Generation [48.40120035775506]
Kolmogorov-Arnold Networks (KANs) reshape neural network learning via stacks of non-linear, learnable activation functions.
We investigate, modify, and re-design the established U-Net pipeline by integrating dedicated KAN layers on the tokenized intermediate representation, termed U-KAN.
We further delve into the potential of U-KAN as an alternative U-Net noise predictor in diffusion models, demonstrating its applicability in generating task-oriented model architectures.
arXiv Detail & Related papers (2024-06-05T04:13:03Z) - Compressing Image-to-Image Translation GANs Using Local Density
Structures on Their Learned Manifold [69.33930972652594]
Generative Adversarial Networks (GANs) have shown remarkable success in modeling complex data distributions for image-to-image translation.
Existing GAN compression methods mainly rely on knowledge distillation or convolutional classifiers' pruning techniques.
We propose a new approach by explicitly encouraging the pruned model to preserve the density structure of the original parameter-heavy model on its learned manifold.
Our experiments on image translation GAN models, Pix2Pix and CycleGAN, with various benchmark datasets and architectures demonstrate our method's effectiveness.
arXiv Detail & Related papers (2023-12-22T15:43:12Z) - DuDGAN: Improving Class-Conditional GANs via Dual-Diffusion [2.458437232470188]
Class-conditional image generation using generative adversarial networks (GANs) has been investigated through various techniques.
We propose a novel approach for class-conditional image generation using GANs called DuDGAN, which incorporates a dual diffusion-based noise injection process.
Our method outperforms state-of-the-art conditional GAN models for image generation.
arXiv Detail & Related papers (2023-05-24T07:59:44Z) - Distilling Representations from GAN Generator via Squeeze and Span [55.76208869775715]
We propose to distill knowledge from GAN generators by squeezing and spanning their representations.
We span the distilled representation from the synthetic domain to the real domain by also using real training data, which remedies the mode collapse of GANs.
arXiv Detail & Related papers (2022-11-06T01:10:28Z) - Information-theoretic stochastic contrastive conditional GAN:
InfoSCC-GAN [6.201770337181472]
We present a contrastive conditional generative adversarial network (InfoSCC-GAN) with an explorable latent space.
InfoSCC-GAN is derived from an information-theoretic formulation of the mutual information between the input data and the latent space representation.
Experiments show that InfoSCC-GAN outperforms the "vanilla" EigenGAN in image generation on the AFHQ and CelebA datasets.
arXiv Detail & Related papers (2021-12-17T17:56:30Z) - Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training [45.70113212633225]
Conditional Generative Adversarial Networks (cGANs) generate realistic images by incorporating class information into the GAN.
One of the most popular cGANs is the auxiliary classifier GAN with softmax cross-entropy loss (ACGAN).
ACGAN also tends to generate easily classifiable samples with a lack of diversity.
arXiv Detail & Related papers (2021-11-01T17:51:33Z) - A Unified View of cGANs with and without Classifiers [24.28407308818025]
Conditional Generative Adversarial Networks (cGANs) are implicit generative models that allow sampling from class-conditional distributions.
Some representative cGANs avoid the shortcoming and reach state-of-the-art performance without having classifiers.
In this work, we demonstrate that classifiers can be properly leveraged to improve cGANs.
arXiv Detail & Related papers (2021-11-01T15:36:33Z) - Guiding GANs: How to control non-conditional pre-trained GANs for
conditional image generation [69.10717733870575]
We present a novel method for guiding generic non-conditional GANs to behave as conditional GANs.
Our approach adds an encoder network into the mix to generate the high-dimensional random inputs that are fed to the generator network of a non-conditional GAN.
arXiv Detail & Related papers (2021-01-04T14:03:32Z) - Unsupervised Controllable Generation with Self-Training [90.04287577605723]
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
arXiv Detail & Related papers (2020-07-17T21:50:35Z) - Unbiased Auxiliary Classifier GANs with MINE [7.902878869106766]
We propose an Unbiased Auxiliary Classifier GAN (UAC-GAN) that utilizes the Mutual Information Neural Estimator (MINE) to estimate the mutual information between the generated data distribution and labels; a minimal sketch of the MINE bound appears after this list.
Our UAC-GAN performs better than AC-GAN and TACGAN on three datasets.
arXiv Detail & Related papers (2020-06-13T05:51:51Z)
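For reference on the last entry, MINE typically denotes the Donsker-Varadhan lower bound on mutual information, I(X; Z) >= E_joint[T(x, z)] - log E_marginals[exp(T(x, z))], maximized over a statistics network T. Below is a minimal, hedged sketch; the network size, the shuffle-based marginal sampling, and the usage snippet are illustrative assumptions rather than the UAC-GAN paper's exact setup.

```python
# Hedged sketch of the Donsker-Varadhan MINE bound referenced in the UAC-GAN
# entry above. Statistics-network size and shuffle-based marginal sampling are
# illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

class MINE(nn.Module):
    def __init__(self, x_dim, z_dim, hidden=64):
        super().__init__()
        self.T = nn.Sequential(nn.Linear(x_dim + z_dim, hidden),
                               nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x, z):
        n = x.size(0)
        # Joint term: statistics network evaluated on paired samples (x_i, z_i).
        joint = self.T(torch.cat([x, z], dim=1)).mean()
        # Break the pairing to approximate samples from the product of marginals.
        z_shuffled = z[torch.randperm(n)]
        t_marg = self.T(torch.cat([x, z_shuffled], dim=1))
        log_mean_exp = torch.logsumexp(t_marg, dim=0) - torch.log(torch.tensor(float(n)))
        return joint - log_mean_exp.squeeze()   # lower bound on I(X; Z)

# Usage: maximize the bound w.r.t. T's parameters to estimate mutual information.
mine = MINE(x_dim=4, z_dim=1)
x, z = torch.randn(256, 4), torch.randint(0, 2, (256, 1)).float()
print(mine(x, z))
```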