Self-labeled Conditional GANs
- URL: http://arxiv.org/abs/2012.02162v1
- Date: Thu, 3 Dec 2020 18:46:46 GMT
- Title: Self-labeled Conditional GANs
- Authors: Mehdi Noroozi
- Abstract summary: This paper introduces a novel and fully unsupervised framework for conditional GAN training in which labels are automatically obtained from data.
We incorporate a clustering network into the standard conditional GAN framework that plays against the discriminator.
- Score: 2.9189409618561966
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces a novel and fully unsupervised framework for
conditional GAN training in which labels are automatically obtained from data.
We incorporate a clustering network into the standard conditional GAN framework
that plays against the discriminator. Together with the generator, it aims to find a
shared structured mapping that associates pseudo-labels with both real and fake
images. Our generator outperforms unconditional GANs in terms of FID with
significant margins on large scale datasets like ImageNet and LSUN. It also
outperforms class conditional GANs trained on human labels on CIFAR10 and
CIFAR100 where fine-grained annotations or a large number of samples per class
are not available. Additionally, our clustering network exceeds the
state-of-the-art on CIFAR100 clustering.
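As a reading aid, here is a minimal PyTorch-style sketch of one training step consistent with the abstract: a clustering network pseudo-labels real images, the generator is conditioned on sampled pseudo-labels, and the clustering network must recover the conditioning label from the fakes. Every module, dimension, and loss below is an illustrative assumption, not the paper's actual architecture or objective.

```python
# Illustrative sketch only: MLP stand-ins for the paper's networks.
import torch
import torch.nn as nn
import torch.nn.functional as F

K, Z_DIM, IMG_DIM = 10, 64, 32 * 32 * 3  # pseudo-classes, noise dim, flattened image

cluster_net = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.ReLU(), nn.Linear(256, K))
generator = nn.Sequential(nn.Linear(Z_DIM + K, 256), nn.ReLU(), nn.Linear(256, IMG_DIM), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(IMG_DIM + K, 256), nn.ReLU(), nn.Linear(256, 1))

opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
opt_gc = torch.optim.Adam(list(generator.parameters()) + list(cluster_net.parameters()), lr=2e-4)

def train_step(real):  # real: (B, IMG_DIM)
    b = real.size(0)
    # 1) The clustering network pseudo-labels the real images.
    y_real = F.one_hot(cluster_net(real).argmax(dim=1), K).float()
    # 2) Fakes are generated conditioned on sampled pseudo-labels.
    y_fake = F.one_hot(torch.randint(0, K, (b,)), K).float()
    fake = generator(torch.cat([torch.randn(b, Z_DIM), y_fake], dim=1))
    # 3) Conditional discriminator: real (image, label) pairs vs. fake ones.
    d_real = discriminator(torch.cat([real, y_real], dim=1))
    d_fake = discriminator(torch.cat([fake.detach(), y_fake], dim=1))
    loss_d = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) \
           + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # 4) The generator fools D while the clustering network must recover the
    #    conditioning label from the fake, coupling labels on real and fake images.
    d_fake = discriminator(torch.cat([fake, y_fake], dim=1))
    loss_g = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    loss_c = F.cross_entropy(cluster_net(fake), y_fake.argmax(dim=1))
    opt_gc.zero_grad(); (loss_g + loss_c).backward(); opt_gc.step()
```

The joint generator/cluster-net update in step 4 is what couples the pseudo-labels assigned to real and fake images into a shared conditioning.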
Related papers
- Generalized Category Discovery with Clustering Assignment Consistency [56.92546133591019]
Generalized category discovery (GCD) is a recently proposed open-world task.
We propose a co-training-based framework that encourages clustering consistency.
Our method achieves state-of-the-art performance on three generic benchmarks and three fine-grained visual recognition datasets.
arXiv Detail & Related papers (2023-10-30T00:32:47Z)
- Instance Adaptive Prototypical Contrastive Embedding for Generalized Zero Shot Learning [11.720039414872296]
Generalized zero-shot learning aims to classify samples from both seen and unseen classes, where samples of unseen classes are not accessible during training.
Recent advancements in GZSL have been expedited by incorporating contrastive-learning-based embedding in generative networks.
arXiv Detail & Related papers (2023-09-13T14:26:03Z)
- Federated Generalized Category Discovery [68.35420359523329]
Generalized category discovery (GCD) aims at grouping unlabeled samples from known and unknown classes.
To meet the recent decentralization trend in the community, we introduce a practical yet challenging task, namely Federated GCD (Fed-GCD).
The goal of Fed-GCD is to train a generic GCD model by client collaboration under the privacy-protected constraint.
arXiv Detail & Related papers (2023-05-23T14:27:41Z)
- Semi-supervised classification using a supervised autoencoder for biomedical applications [2.578242050187029]
We create a network architecture that encodes labels into the latent space of an autoencoder.
We classify unlabelled samples using the learned network.
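A minimal sketch of this idea, assuming a simple MLP autoencoder with a classification head on the latent code (the dimensions and the loss weight `alpha` are hypothetical, not the paper's settings):

```python
# Sketch: the latent code is trained both to reconstruct the input and to
# predict the label, so unlabelled samples can be classified from it.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SupervisedAE(nn.Module):
    def __init__(self, in_dim=100, latent=16, n_classes=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, in_dim))
        self.head = nn.Linear(latent, n_classes)  # label branch in latent space

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), self.head(z)

def loss_fn(model, x, y=None, alpha=1.0):
    recon, logits = model(x)
    loss = F.mse_loss(recon, x)                # every sample reconstructs
    if y is not None:                           # labelled samples also classify
        loss = loss + alpha * F.cross_entropy(logits, y)
    return loss
```

Unlabelled batches contribute only the reconstruction term; at test time `model(x)[1].argmax(dim=1)` classifies new samples.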
arXiv Detail & Related papers (2022-08-22T13:51:00Z)
- S3: Supervised Self-supervised Learning under Label Noise [53.02249460567745]
In this paper we address the problem of classification in the presence of label noise.
At the heart of our method is a sample selection mechanism that relies on the consistency between the annotated label of a sample and the distribution of the labels in its neighborhood in the feature space.
Our method significantly surpasses previous methods on both CIFAR10/CIFAR100 with artificial noise and on real-world noisy datasets such as WebVision and ANIMAL-10N.
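A minimal NumPy sketch of a neighborhood-consistency selection rule of this kind (the value of k, the cosine metric, and majority voting are assumptions; the paper's exact criterion may differ):

```python
# A sample is kept as "clean" when its annotated label agrees with the
# dominant label among its k nearest neighbours in feature space.
import numpy as np

def select_clean(features, labels, k=10):
    """features: (N, D) float array; labels: (N,) non-negative ints -> keep mask."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T                                # pairwise cosine similarity
    np.fill_diagonal(sim, -np.inf)               # exclude the sample itself
    nbrs = np.argsort(-sim, axis=1)[:, :k]       # k nearest neighbours
    keep = np.empty(len(labels), dtype=bool)
    for i, nn_idx in enumerate(nbrs):
        votes = np.bincount(labels[nn_idx])
        keep[i] = votes.argmax() == labels[i]    # label matches neighbourhood
    return keep
```

At scale, the brute-force similarity matrix would be replaced with an approximate nearest-neighbour index.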
arXiv Detail & Related papers (2021-11-22T15:49:20Z)
- Dual Projection Generative Adversarial Networks for Conditional Image Generation [26.563829113916942]
We propose a Dual Projection GAN (P2GAN) model that learns to balance between data matching and label matching.
We then propose an improved cGAN model with Auxiliary Classification that directly aligns the fake and real conditionals $P(\text{class}|\text{image})$ by minimizing their $f$-divergence.
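As a concrete reading of the auxiliary-classification idea, here is a sketch using the KL instance of the $f$-divergence, where aligning the conditionals reduces to cross-entropy terms on real and fake images; the architectures and dimensions are placeholders, not the paper's models:

```python
# Classifier C estimates P(class|image) on real data; the generator is
# penalised when C's prediction on a fake diverges from its conditioning label.
import torch
import torch.nn as nn
import torch.nn.functional as F

K, Z, D = 10, 64, 3 * 32 * 32
C = nn.Sequential(nn.Linear(D, 256), nn.ReLU(), nn.Linear(256, K))
G = nn.Sequential(nn.Linear(Z + K, 256), nn.ReLU(), nn.Linear(256, D), nn.Tanh())

def classifier_loss(real, y_real):
    # Fit C to the real conditional P(class | image).
    return F.cross_entropy(C(real), y_real)

def generator_alignment_loss(batch=32):
    y = torch.randint(0, K, (batch,))
    z = torch.randn(batch, Z)
    fake = G(torch.cat([z, F.one_hot(y, K).float()], dim=1))
    # KL(one-hot(y) || C(fake)) reduces to cross-entropy on the fakes,
    # pushing the fake conditional toward the real one.
    return F.cross_entropy(C(fake), y)
```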
arXiv Detail & Related papers (2021-08-20T06:10:38Z)
- A Unified Generative Adversarial Network Training via Self-Labeling and Self-Attention [38.31735499785227]
We propose a novel GAN training scheme that can handle any level of labeling in a unified manner.
Our scheme introduces a form of artificial labeling that can incorporate manually defined labels, when available.
We evaluate our approach on CIFAR-10, STL-10 and SVHN, and show that both self-labeling and self-attention consistently improve the quality of generated data.
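One way to read the "any level of labeling" scheme is as a merge of manual labels, where present, with pseudo-labels elsewhere; the rule below is an assumption based on the summary, not the paper's exact formulation:

```python
# Hypothetical merging rule: use the annotation when it exists (-1 marks
# "no annotation"), otherwise fall back to a clustering pseudo-label.
import torch

def merge_labels(manual, pseudo):
    """manual: (B,) ints with -1 where unannotated; pseudo: (B,) ints."""
    return torch.where(manual >= 0, manual, pseudo)
```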
arXiv Detail & Related papers (2021-06-18T04:40:26Z)
- Generative Multi-Label Zero-Shot Learning [136.17594611722285]
Multi-label zero-shot learning strives to classify images into multiple unseen categories for which no data is available during training.
Our work is the first to tackle the problem of multi-label feature synthesis in the (generalized) zero-shot setting.
Our cross-level fusion-based generative approach outperforms the state-of-the-art on all three datasets.
arXiv Detail & Related papers (2021-01-27T18:56:46Z)
- Guiding GANs: How to control non-conditional pre-trained GANs for conditional image generation [69.10717733870575]
We present a novel method for guiding generic non-conditional GANs to behave as conditional GANs.
Our approach adds an encoder network to the mix to generate the high-dimensional random inputs that are fed to the generator network of a non-conditional GAN.
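A minimal sketch of this guiding setup, assuming the encoder takes a class label plus noise and emits the latent vector for a frozen pre-trained generator (the encoder design, and the training signal that the summary does not specify, are assumptions):

```python
# Class-conditional sampling from an unmodified, frozen unconditional G.
import torch
import torch.nn as nn
import torch.nn.functional as F

K, Z = 10, 128  # number of classes, latent dimension of the pre-trained G

class LatentEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(K + Z, 256), nn.ReLU(), nn.Linear(256, Z))

    def forward(self, y, noise):
        return self.net(torch.cat([F.one_hot(y, K).float(), noise], dim=1))

def sample(generator, encoder, y):
    """generator: frozen pre-trained unconditional G taking a Z-dim latent."""
    with torch.no_grad():
        z = encoder(y, torch.randn(y.size(0), Z))
        return generator(z)
```

In practice the encoder needs some supervision, for example a pre-trained classifier scoring G's outputs; none is shown here because the summary leaves it unspecified.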
arXiv Detail & Related papers (2021-01-04T14:03:32Z)
- Inducing Optimal Attribute Representations for Conditional GANs [61.24506213440997]
Conditional GANs are widely used in translating an image from one category to another.
Existing conditional GANs commonly encode target domain label information as hard-coded categorical vectors in the form of 0s and 1s.
We propose a novel end-to-end learning framework with Graph Convolutional Networks to learn the attribute representations to condition on the generator.
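A minimal sketch of replacing hard 0/1 label vectors with learned attribute embeddings from one graph-convolution step; the graph, the mean-style pooling, and all dimensions are illustrative assumptions:

```python
# One-layer graph convolution over an attribute graph; the pooled embedding
# of a sample's active attributes conditions the generator.
import torch
import torch.nn as nn

class AttributeGCN(nn.Module):
    def __init__(self, n_attr=40, dim=32):
        super().__init__()
        self.emb = nn.Parameter(torch.randn(n_attr, dim))  # one node per attribute
        self.w = nn.Linear(dim, dim)

    def forward(self, adj, active):
        """adj: (n_attr, n_attr) normalised adjacency; active: (B, n_attr) float 0/1 mask."""
        h = torch.relu(self.w(adj @ self.emb))   # one graph-convolution step
        return active @ h                        # pooled embedding of active attributes

# The pooled (B, dim) vector replaces the one-hot label as generator input:
# cond = AttributeGCN()(adj, active); fake = G(torch.cat([z, cond], dim=1))
```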
arXiv Detail & Related papers (2020-03-13T20:24:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.