A Unified View of cGANs with and without Classifiers
- URL: http://arxiv.org/abs/2111.01035v1
- Date: Mon, 1 Nov 2021 15:36:33 GMT
- Title: A Unified View of cGANs with and without Classifiers
- Authors: Si-An Chen, Chun-Liang Li, Hsuan-Tien Lin
- Abstract summary: Conditional Generative Adversarial Networks (cGANs) are implicit generative models which allow sampling from class-conditional distributions.
Some representative cGANs avoid this shortcoming and reach state-of-the-art performance without classifiers.
In this work, we demonstrate that classifiers can be properly leveraged to improve cGANs.
- Score: 24.28407308818025
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conditional Generative Adversarial Networks (cGANs) are implicit generative
models which allow sampling from class-conditional distributions. Existing
cGANs are based on a wide range of different discriminator designs and training
objectives. One popular design in earlier works is to include a classifier
during training with the assumption that good classifiers can help eliminate
samples generated with wrong classes. Nevertheless, including classifiers in
cGANs often comes with a side effect of only generating easy-to-classify
samples. Recently, some representative cGANs avoid this shortcoming and reach
state-of-the-art performance without classifiers. It remains unanswered,
however, whether classifiers can be resurrected to design better cGANs.
In this work, we demonstrate that classifiers can be properly leveraged to
improve cGANs. We start by using the decomposition of the joint probability
distribution to connect the goals of cGANs and classification as a unified
framework. The framework, along with a classic energy model to parameterize
distributions, justifies the use of classifiers for cGANs in a principled
manner. It explains several popular cGAN variants, such as ACGAN, ProjGAN, and
ContraGAN, as special cases with different levels of approximations, which
provides a unified view and brings new insights to understanding cGANs.
Experimental results demonstrate that the design inspired by the proposed
framework outperforms state-of-the-art cGANs on multiple benchmark datasets,
especially on the most challenging ImageNet. The code is available at
https://github.com/sian-chen/PyTorch-ECGAN.
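The decomposition mentioned in the abstract can be sketched with a standard identity (a general illustration of the idea, not necessarily the paper's exact formulation). For data x and label y:

```latex
% Joint decomposition connecting generation and classification
\log p(x, y) = \log p(y \mid x) + \log p(x)

% Energy-style parameterization with class logits f_y(x):
%   p(x, y) \propto \exp(f_y(x))
p(y \mid x) = \frac{\exp(f_y(x))}{\sum_k \exp(f_k(x))},
\qquad
p(x) \propto \sum_k \exp(f_k(x))
```

Under such a parameterization the conditional term is exactly a softmax classifier, which is one way a classifier can enter a cGAN objective in a principled manner.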
Related papers
- Sequential training of GANs against GAN-classifiers reveals correlated
"knowledge gaps" present among independently trained GAN instances [1.104121146441257]
We iteratively train GAN-classifiers and train GANs that "fool" the classifiers.
We examine the effect on GAN training dynamics, output quality, and GAN-classifier generalization.
arXiv Detail & Related papers (2023-03-27T18:18:15Z) - Parametric Classification for Generalized Category Discovery: A Baseline
Study [70.73212959385387]
Generalized Category Discovery (GCD) aims to discover novel categories in unlabelled datasets using knowledge learned from labelled samples.
We investigate the failure of parametric classifiers, verify the effectiveness of previous design choices when high-quality supervision is available, and identify unreliable pseudo-labels as a key problem.
We propose a simple yet effective parametric classification method that benefits from entropy regularisation, achieves state-of-the-art performance on multiple GCD benchmarks and shows strong robustness to unknown class numbers.
arXiv Detail & Related papers (2022-11-21T18:47:11Z) - UQGAN: A Unified Model for Uncertainty Quantification of Deep
Classifiers trained via Conditional GANs [9.496524884855559]
We present an approach to quantifying uncertainty for deep neural networks in image classification, based on generative adversarial networks (GANs).
Instead of shielding the entire in-distribution data with GAN generated OoD examples, we shield each class separately with out-of-class examples generated by a conditional GAN.
In particular, we improve over the OoD detection and FP detection performance of state-of-the-art GAN-training based classifiers.
arXiv Detail & Related papers (2022-01-31T14:42:35Z) - Self-Ensembling GAN for Cross-Domain Semantic Segmentation [107.27377745720243]
This paper proposes a self-ensembling generative adversarial network (SE-GAN) exploiting cross-domain data for semantic segmentation.
In SE-GAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps, which, together with a discriminator, forms a GAN.
Despite its simplicity, we find SE-GAN can significantly boost the performance of adversarial training and enhance the stability of the model.
arXiv Detail & Related papers (2021-12-15T09:50:25Z) - Improving Model Compatibility of Generative Adversarial Networks by
Boundary Calibration [24.28407308818025]
Boundary-Calibration GANs (BCGANs) are proposed to improve GANs' model compatibility.
BCGANs generate realistic images like the original GANs while also achieving superior model compatibility compared to the original GANs.
arXiv Detail & Related papers (2021-11-03T16:08:09Z) - cGANs with Auxiliary Discriminative Classifier [43.78253518292111]
Conditional generative models aim to learn the underlying joint distribution of data and labels.
Auxiliary classifier generative adversarial networks (AC-GAN) have been widely used, but suffer from low intra-class diversity in generated samples.
We propose novel cGANs with auxiliary discriminative classifier (ADC-GAN) to address the issue of AC-GAN.
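The auxiliary-classifier idea referenced above can be illustrated with a minimal NumPy sketch (the function names, the 1e-12 stabilizer, and the aux_weight parameter are hypothetical, not taken from either paper): the generator minimizes an adversarial term plus a cross-entropy term from an auxiliary classifier, and it is this extra term that can bias generation toward easy-to-classify samples.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def acgan_generator_loss(d_fake, class_logits, labels, aux_weight=1.0):
    """AC-GAN-style generator objective (minimal sketch):
    a non-saturating adversarial term plus an auxiliary
    cross-entropy term pushing samples toward their target class."""
    adv = -np.log(d_fake + 1e-12).mean()  # fool the discriminator
    probs = softmax(class_logits)
    ce = -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()
    return adv + aux_weight * ce
```

The auxiliary cross-entropy term rewards confidently classified samples, which helps explain the low intra-class diversity that ADC-GAN's discriminative classifier is designed to avoid.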
arXiv Detail & Related papers (2021-07-21T13:06:32Z) - Generalized Zero-Shot Learning via VAE-Conditioned Generative Flow [83.27681781274406]
Generalized zero-shot learning aims to recognize both seen and unseen classes by transferring knowledge from semantic descriptions to visual representations.
Recent generative methods formulate GZSL as a missing data problem, which mainly adopts GANs or VAEs to generate visual features for unseen classes.
We propose a conditional version of generative flows for GZSL, i.e., VAE-Conditioned Generative Flow (VAE-cFlow).
arXiv Detail & Related papers (2020-09-01T09:12:31Z) - Improving Generative Adversarial Networks with Local Coordinate Coding [150.24880482480455]
Generative adversarial networks (GANs) have shown remarkable success in generating realistic data from some predefined prior distribution.
In practice, semantic information might be represented by some latent distribution learned from data.
We propose an LCCGAN model with local coordinate coding (LCC) to improve the performance of generating data.
arXiv Detail & Related papers (2020-07-28T09:17:50Z) - Unsupervised Controllable Generation with Self-Training [90.04287577605723]
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
arXiv Detail & Related papers (2020-07-17T21:50:35Z) - Unbiased Auxiliary Classifier GANs with MINE [7.902878869106766]
We propose Unbiased Auxiliary Classifier GANs (UAC-GAN), which utilize Mutual Information Neural Estimation (MINE) to estimate the mutual information between the generated data distribution and labels.
Our UAC-GAN performs better than AC-GAN and TACGAN on three datasets.
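MINE itself rests on the Donsker-Varadhan lower bound on mutual information. A minimal sketch of the bound follows; in practice the statistics function T is a trained neural network, which is omitted here:

```python
import numpy as np

def mine_lower_bound(t_joint, t_marginal):
    """Donsker-Varadhan bound used by MINE:
    I(X; Y) >= E_{p(x,y)}[T] - log E_{p(x)p(y)}[exp(T)].

    t_joint:    T evaluated on samples drawn jointly from p(x, y)
    t_marginal: T evaluated on independently shuffled samples from p(x)p(y)
    """
    return t_joint.mean() - np.log(np.exp(t_marginal).mean())
```

Training maximizes this bound over T to tighten the estimate; UAC-GAN plugs such an estimate of the dependence between generated samples and labels into the GAN objective.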
arXiv Detail & Related papers (2020-06-13T05:51:51Z) - Embedding Graph Auto-Encoder for Graph Clustering [90.8576971748142]
Graph auto-encoder (GAE) models are based on semi-supervised graph convolution networks (GCN).
We design a specific GAE-based model for graph clustering to be consistent with the theory, namely the Embedding Graph Auto-Encoder (EGAE).
EGAE consists of one encoder and dual decoders.
arXiv Detail & Related papers (2020-02-20T09:53:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated list (including all information) and is not responsible for any consequences of its use.