OSSGAN: Open-Set Semi-Supervised Image Generation
- URL: http://arxiv.org/abs/2204.14249v1
- Date: Fri, 29 Apr 2022 17:26:09 GMT
- Title: OSSGAN: Open-Set Semi-Supervised Image Generation
- Authors: Kai Katsumata and Duc Minh Vo and Hideki Nakayama
- Abstract summary: We introduce a challenging training scheme of conditional GANs, called open-set semi-supervised image generation.
OSSGAN provides decision clues to the discriminator on the basis of whether an unlabeled image belongs to one or none of the classes of interest.
The results of experiments on Tiny ImageNet and ImageNet show notable improvements over supervised BigGAN and semi-supervised methods.
- Score: 26.67298827670573
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a challenging training scheme of conditional GANs, called
open-set semi-supervised image generation, where the training dataset consists
of two parts: (i) labeled data and (ii) unlabeled data with samples belonging
to one of the labeled data classes, namely, a closed-set, and samples not
belonging to any of the labeled data classes, namely, an open-set. Unlike the
existing semi-supervised image generation task, where unlabeled data only
contain closed-set samples, our task is more general and lowers the data
collection cost in practice by allowing open-set samples to appear. Thanks to
entropy regularization, the classifier trained on labeled data is able to
quantify the sample-wise importance to cGAN training as a confidence score,
allowing us to use all samples in the unlabeled data. We design OSSGAN, which
provides decision clues to the discriminator on the basis of whether an
unlabeled image belongs to one or none of the classes of interest, smoothly
integrating labeled and unlabeled data during training. The results of
experiments on Tiny ImageNet and ImageNet show notable improvements over
supervised BigGAN and semi-supervised methods. Our code is available at
https://github.com/raven38/OSSGAN.
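As a rough illustration of the mechanism described in the abstract, the sketch below shows how an entropy-regularized auxiliary classifier could yield per-sample confidence weights for unlabeled images, and how those weights could enter the discriminator loss. The function names, the hinge loss, and the exact weighting scheme are illustrative assumptions, not the official OSSGAN objective from the paper or repository.

```python
import torch
import torch.nn.functional as F

def entropy_regularizer(logits):
    """Mean entropy of the predicted class distribution over a batch.
    Entropy regularization is what lets the classifier's confidence separate
    closed-set from open-set samples; how this term enters the classifier
    loss is an assumption here, not taken from the paper."""
    probs = F.softmax(logits, dim=1)
    return -(probs * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

def confidence_weights(logits):
    """Per-sample importance of unlabeled images: high when the classifier
    confidently assigns one of the known classes, low otherwise."""
    probs = F.softmax(logits, dim=1)
    max_prob, _ = probs.max(dim=1)
    return max_prob.detach()  # stop gradients through the weights

def weighted_discriminator_loss(d_real_labeled, d_real_unlabeled, d_fake, weights):
    """Hinge discriminator loss in which unlabeled real samples are scaled by
    the classifier confidence (a sketch of 'decision clues', not the exact
    OSSGAN formulation)."""
    loss_labeled = F.relu(1.0 - d_real_labeled).mean()
    loss_unlabeled = (weights * F.relu(1.0 - d_real_unlabeled)).mean()
    loss_fake = F.relu(1.0 + d_fake).mean()
    return loss_labeled + loss_unlabeled + loss_fake
```

In this reading, unlabeled samples that the classifier cannot assign to any known class receive low weight, which is one way to interpret "decision clues ... on the basis of whether an unlabeled image belongs to one or none of the classes of interest."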
Related papers
- Generalized Category Discovery with Clustering Assignment Consistency [56.92546133591019]
Generalized category discovery (GCD) is a recently proposed open-world task.
We propose a co-training-based framework that encourages clustering consistency.
Our method achieves state-of-the-art performance on three generic benchmarks and three fine-grained visual recognition datasets.
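A minimal sketch of a clustering-consistency term, assuming the generic setup of two augmented views per image; this is only one plausible instantiation, not the paper's exact co-training objective.

```python
import torch.nn.functional as F

def clustering_consistency_loss(logits_view1, logits_view2):
    """Symmetric KL divergence between the soft cluster assignments of two
    augmented views of the same image (a generic consistency term)."""
    log_p1 = F.log_softmax(logits_view1, dim=1)
    log_p2 = F.log_softmax(logits_view2, dim=1)
    kl_12 = F.kl_div(log_p1, log_p2.exp(), reduction="batchmean")
    kl_21 = F.kl_div(log_p2, log_p1.exp(), reduction="batchmean")
    return 0.5 * (kl_12 + kl_21)
```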
arXiv Detail & Related papers (2023-10-30T00:32:47Z)
- Manifold DivideMix: A Semi-Supervised Contrastive Learning Framework for Severe Label Noise [4.90148689564172]
Real-world datasets contain noisy-label samples that have no semantic relevance to any class in the dataset.
Most state-of-the-art methods leverage ID labeled noisy samples as unlabeled data for semi-supervised learning.
We propose incorporating the information from all the training data by leveraging the benefits of self-supervised training.
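A hedged sketch of the stated idea of pairing supervised learning on samples judged clean with self-supervised training on all samples, including out-of-distribution noisy ones; the SimCLR-style contrastive loss and the clean/all split are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """Standard SimCLR-style contrastive loss on two augmented views, used
    here only to stand in for 'self-supervised training on all samples'."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)          # (2N, D)
    sim = z @ z.t() / temperature                                # (2N, 2N)
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))                   # drop self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def total_loss(clean_logits, clean_labels, z1_all, z2_all, lam=1.0):
    """Supervised loss on presumed-clean samples plus self-supervised loss on
    every training sample (a sketch, not the paper's full framework)."""
    return F.cross_entropy(clean_logits, clean_labels) + lam * nt_xent(z1_all, z2_all)
```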
arXiv Detail & Related papers (2023-08-13T23:33:33Z)
- Soft Curriculum for Learning Conditional GANs with Noisy-Labeled and Uncurated Unlabeled Data [70.25049762295193]
We introduce a novel conditional image generation framework that accepts noisy-labeled and uncurated data during training.
We propose soft curriculum learning, which assigns instance-wise weights for adversarial training while assigning new labels for unlabeled data.
Our experiments show that our approach outperforms existing semi-supervised and label-noise robust methods in terms of both quantitative and qualitative performance.
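A minimal sketch of instance-wise weighting plus pseudo-labeling for unlabeled data, as one plausible reading of the soft-curriculum idea; using classifier confidence as the weight and a hinge adversarial loss are assumptions, not the authors' exact scheme.

```python
import torch
import torch.nn.functional as F

def pseudo_labels_and_weights(classifier_logits):
    """Assign each unlabeled image its most likely class as a new label and
    use the classifier's confidence as its instance-wise weight."""
    probs = F.softmax(classifier_logits, dim=1)
    weights, pseudo_labels = probs.max(dim=1)
    return pseudo_labels, weights.detach()

def weighted_adversarial_loss(d_out_unlabeled, weights):
    """Hinge loss on unlabeled 'real' samples, scaled per instance."""
    return (weights * F.relu(1.0 - d_out_unlabeled)).mean()
```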
arXiv Detail & Related papers (2023-07-17T08:31:59Z)
- Learning Semi-supervised Gaussian Mixture Models for Generalized Category Discovery [36.01459228175808]
We propose an EM-like framework that alternates between representation learning and class number estimation.
We evaluate our framework on both generic image classification datasets and challenging fine-grained object recognition datasets.
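A toy sketch of the semi-supervised Gaussian mixture component, assuming labeled points keep hard responsibilities for their known class during EM; the alternation with representation learning and class-number estimation mentioned in the summary is not reproduced here.

```python
import numpy as np

def semi_supervised_gmm(X, labels, n_components, n_iter=50, eps=1e-8):
    """Minimal EM for a spherical Gaussian mixture in which labeled points
    (labels >= 0) are clamped to their known class, while unlabeled points
    (label == -1) receive soft responsibilities."""
    n, d = X.shape
    rng = np.random.default_rng(0)
    means = X[rng.choice(n, n_components, replace=False)]
    var = np.full(n_components, X.var() + eps)
    weights = np.full(n_components, 1.0 / n_components)
    labeled = labels >= 0
    for _ in range(n_iter):
        # E-step: spherical Gaussian log-densities, then responsibilities.
        dist2 = ((X[:, None, :] - means[None]) ** 2).sum(-1)            # (n, k)
        log_p = -0.5 * (dist2 / var + d * np.log(2 * np.pi * var)) + np.log(weights)
        log_p -= log_p.max(axis=1, keepdims=True)
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
        resp[labeled] = np.eye(n_components)[labels[labeled]]           # clamp labeled points
        # M-step: update means, variances, and mixing weights.
        nk = resp.sum(axis=0) + eps
        means = (resp.T @ X) / nk[:, None]
        dist2 = ((X[:, None, :] - means[None]) ** 2).sum(-1)
        var = (resp * dist2).sum(axis=0) / (d * nk) + eps
        weights = nk / n
    return means, var, weights, resp
```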
arXiv Detail & Related papers (2023-05-10T13:47:38Z)
- Semi-Supervised Image Captioning by Adversarially Propagating Labeled Data [95.0476489266988]
We present a novel data-efficient semi-supervised framework to improve the generalization of image captioning models.
Our proposed method trains a captioner to learn from paired data and to progressively associate unpaired data.
We report extensive empirical results on (1) image-based and (2) dense region-based captioning datasets, followed by a comprehensive analysis on the scarcely-paired dataset.
arXiv Detail & Related papers (2023-01-26T15:25:43Z)
- GuidedMix-Net: Semi-supervised Semantic Segmentation by Using Labeled Images as Reference [90.5402652758316]
We propose a novel method for semi-supervised semantic segmentation named GuidedMix-Net.
It uses labeled information to guide the learning of unlabeled instances.
It achieves competitive segmentation accuracy and significantly improves the mIoU by +7% compared to previous approaches.
arXiv Detail & Related papers (2021-12-28T06:48:03Z)
- OpenCoS: Contrastive Semi-supervised Learning for Handling Open-set Unlabeled Data [65.19205979542305]
Unlabeled data may include out-of-class samples in practice.
OpenCoS is a method for handling this realistic semi-supervised learning scenario.
arXiv Detail & Related papers (2021-06-29T06:10:05Z)
- GuidedMix-Net: Learning to Improve Pseudo Masks Using Labeled Images as Reference [153.354332374204]
We propose a novel method for semi-supervised semantic segmentation named GuidedMix-Net.
We first introduce a feature alignment objective between labeled and unlabeled data to capture potentially similar image pairs.
MITrans is shown to be a powerful knowledge module for progressively refining the features of unlabeled data.
Along with supervised learning for labeled data, the prediction of unlabeled data is jointly learned with the generated pseudo masks.
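A minimal sketch of jointly training on labeled masks and generated pseudo masks for unlabeled images; taking the pseudo masks as the argmax of refined predictions is an assumption standing in for the paper's mixing and MITrans refinement steps.

```python
import torch
import torch.nn.functional as F

def guided_segmentation_loss(logits_labeled, gt_masks, logits_unlabeled, refined_logits, lam=1.0):
    """Supervised cross-entropy on labeled images plus cross-entropy of
    unlabeled predictions against pseudo masks derived from refined
    predictions (a generic sketch, not the official GuidedMix-Net loss)."""
    sup = F.cross_entropy(logits_labeled, gt_masks)
    pseudo_masks = refined_logits.argmax(dim=1).detach()   # (N, H, W) hard pseudo labels
    unsup = F.cross_entropy(logits_unlabeled, pseudo_masks)
    return sup + lam * unsup
```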
arXiv Detail & Related papers (2021-06-29T02:48:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.