MatchGAN: A Self-Supervised Semi-Supervised Conditional Generative
Adversarial Network
- URL: http://arxiv.org/abs/2006.06614v2
- Date: Thu, 8 Oct 2020 18:57:44 GMT
- Title: MatchGAN: A Self-Supervised Semi-Supervised Conditional Generative
Adversarial Network
- Authors: Jiaze Sun, Binod Bhattarai, Tae-Kyun Kim
- Abstract summary: We present a novel self-supervised learning approach for conditional generative adversarial networks (GANs) under a semi-supervised setting.
We perform augmentation by randomly sampling sensible labels from the label space of the few labelled examples available.
Our method surpasses the baseline while using only 20% of the labelled examples used to train the baseline.
- Score: 51.84251358009803
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel self-supervised learning approach for conditional
generative adversarial networks (GANs) under a semi-supervised setting. Unlike
prior self-supervised approaches which often involve geometric augmentations on
the image space such as predicting rotation angles, our pretext task leverages
the label space. We perform augmentation by randomly sampling sensible labels
from the label space of the few labelled examples available and assigning them
as target labels to the abundant unlabelled examples from the same distribution
as that of the labelled ones. The images are then translated and grouped into
positive and negative pairs by their target labels, acting as training examples
for our pretext task which involves optimising an auxiliary match loss on the
discriminator's side. We tested our method on two challenging benchmarks,
CelebA and RaFD, and evaluated the results using standard metrics including
Fréchet Inception Distance, Inception Score, and Attribute Classification
Rate. Extensive empirical evaluation demonstrates the effectiveness of our
proposed method over competitive baselines and the existing state of the art. In
particular, our method surpasses the baseline while using only 20% of the
labelled examples that were used to train the baseline.
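The pretext task is concise enough to sketch. Below is a minimal, hypothetical PyTorch rendering of the label-space augmentation and auxiliary match loss described in the abstract; this is not the authors' released code, and the names `sample_target_labels`, `match_loss`, and `pretext_step` are illustrative, with a pairwise binary match objective standing in for the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def sample_target_labels(labelled_y, n):
    """Label-space augmentation: draw n target labels uniformly from the
    label configurations observed in the few labelled examples."""
    idx = torch.randint(0, labelled_y.size(0), (n,))
    return labelled_y[idx]

def match_loss(feats, target_y):
    """Auxiliary match loss sketch: translated images sharing a target
    label form positive pairs; all other pairs are negatives."""
    sim = feats @ feats.t()                                   # pairwise similarity
    same = (target_y.unsqueeze(0) == target_y.unsqueeze(1)).all(-1).float()
    mask = ~torch.eye(len(target_y), dtype=torch.bool, device=sim.device)
    return F.binary_cross_entropy_with_logits(sim[mask], same[mask])

def pretext_step(G, D_feat, x_unlabelled, labelled_y):
    """One discriminator-side pretext step: translate unlabelled images to
    sampled target labels, embed them, and score pair agreement."""
    y_target = sample_target_labels(labelled_y, x_unlabelled.size(0))
    x_translated = G(x_unlabelled, y_target)    # conditional translation
    return match_loss(D_feat(x_translated), y_target)
```

Here the abundant unlabelled images acquire supervision purely from the sampled target labels, which is what lets the discriminator's match head train without ground-truth annotations.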
Related papers
- Theory-inspired Label Shift Adaptation via Aligned Distribution Mixture [21.494268411607766]
We propose an innovative label shift framework named Aligned Distribution Mixture (ADM).
Within this framework, we enhance four typical label shift methods by introducing modifications to the classifier training process.
Considering the distinctiveness of the proposed one-step approach, we develop an efficient bi-level optimization strategy.
arXiv Detail & Related papers (2024-11-04T12:51:57Z)
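The summary above does not spell out ADM itself, so as background here is a minimal NumPy sketch of the classical confusion-matrix estimator (BBSE-style) that label shift frameworks of this kind typically build on; `estimate_target_priors` and `importance_weights` are illustrative names, not ADM's API.

```python
import numpy as np

def estimate_target_priors(conf_matrix, target_pred_dist):
    """Classical label-shift estimation: solve C q = mu, where
    C[i, j] = P(predict i | true class j) on held-out source data and
    mu is the predicted-label distribution on unlabeled target data."""
    q, *_ = np.linalg.lstsq(conf_matrix, target_pred_dist, rcond=None)
    q = np.clip(q, 0.0, None)
    return q / q.sum()

def importance_weights(source_priors, target_priors):
    """Per-class weights w[y] = q(y) / p(y) used to reweight the training
    loss; frameworks like ADM modify how the classifier is trained with
    variants of such weights."""
    return target_priors / np.clip(source_priors, 1e-12, None)
```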
- Adversarial Semi-Supervised Domain Adaptation for Semantic Segmentation: A New Role for Labeled Target Samples [7.199108088621308]
We design new training objective losses for cases when labeled target data behave as source samples or as real target samples.
To support our approach, we consider a complementary method that mixes source and labeled target data, then applies the same adaptation process.
We illustrate our findings through extensive experiments on the benchmarks GTA5, SYNTHIA, and Cityscapes.
arXiv Detail & Related papers (2023-12-12T15:40:22Z)
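As a hedged illustration of the dual role described above, the following generic PyTorch sketch lets labeled target samples contribute a supervised term alongside the source while unlabeled target predictions are aligned adversarially; `seg_net`, `d_net`, and the weighting `lam` are assumptions, not the paper's actual objectives.

```python
import torch
import torch.nn.functional as F

def ssda_loss(seg_net, d_net, x_s, y_s, x_tl, y_tl, x_tu, lam=1e-3):
    """Generic adversarial semi-supervised DA sketch: labeled target data
    add a supervised term next to the source ('behave as source'), while
    unlabeled target predictions are pushed toward source-like statistics
    by a domain discriminator d_net ('behave as real target')."""
    sup = F.cross_entropy(seg_net(x_s), y_s) + F.cross_entropy(seg_net(x_tl), y_tl)
    d_out = d_net(F.softmax(seg_net(x_tu), dim=1))   # domain logits
    adv = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    return sup + lam * adv
```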
- Class-Distribution-Aware Pseudo Labeling for Semi-Supervised Multi-Label Learning [97.88458953075205]
Pseudo-labeling has emerged as a popular and effective approach for utilizing unlabeled data.
This paper proposes a novel solution called Class-Aware Pseudo-Labeling (CAP) that performs pseudo-labeling in a class-aware manner.
arXiv Detail & Related papers (2023-05-04T12:52:18Z)
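One plausible reading of class-aware pseudo-labeling, sketched below in NumPy: pick a per-class threshold so that each class's positive rate matches its estimated prior instead of one global cutoff. This illustrates the general idea, not necessarily the paper's exact CAP procedure.

```python
import numpy as np

def class_aware_pseudo_labels(probs, class_priors):
    """Per-class thresholding: for each class, keep the top scores so the
    positive rate matches the class's estimated prior, rather than a single
    global cutoff that favors head classes. probs: (n, c) sigmoid scores."""
    n, c = probs.shape
    pseudo = np.zeros_like(probs, dtype=bool)
    for j in range(c):
        k = max(1, int(round(class_priors[j] * n)))  # expected positives
        thresh = np.partition(probs[:, j], -k)[-k]   # k-th largest score
        pseudo[:, j] = probs[:, j] >= thresh
    return pseudo
```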
- Exploiting Completeness and Uncertainty of Pseudo Labels for Weakly Supervised Video Anomaly Detection [149.23913018423022]
Weakly supervised video anomaly detection aims to identify abnormal events in videos using only video-level labels.
Two-stage self-training methods have achieved significant improvements by self-generating pseudo labels.
We propose an enhancement framework by exploiting completeness and uncertainty properties for effective self-training.
arXiv Detail & Related papers (2022-12-08T05:53:53Z)
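A generic sketch of uncertainty-aware pseudo-label filtering in this spirit (not the paper's specific framework): snippets whose anomaly scores are both confident and stable across stochastic forward passes become pseudo labels; the rest stay unlabeled. The thresholds and MC-dropout setup are assumptions.

```python
import torch

def filter_snippet_pseudo_labels(scores_mc, low=0.2, high=0.8):
    """scores_mc: (T, num_snippets) anomaly scores from T stochastic
    forward passes (e.g., MC dropout). Snippets with confident, stable
    scores become pseudo labels (1 = abnormal, 0 = normal); uncertain
    snippets stay NaN, i.e., excluded from self-training."""
    mean, var = scores_mc.mean(dim=0), scores_mc.var(dim=0)
    certain = var < var.median()                 # low-uncertainty half
    labels = torch.full_like(mean, float('nan'))
    labels[certain & (mean > high)] = 1.0
    labels[certain & (mean < low)] = 0.0
    return labels
```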
- Disentangling Sampling and Labeling Bias for Learning in Large-Output Spaces [64.23172847182109]
We show that different negative sampling schemes implicitly trade off performance on dominant versus rare labels.
We provide a unified means to explicitly tackle both sampling bias, arising from working with a subset of all labels, and labeling bias, which is inherent to the data due to label imbalance.
arXiv Detail & Related papers (2021-05-12T15:40:13Z)
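The standard correction for this kind of sampling bias subtracts the log proposal probability from each sampled negative's logit; the sketch below shows that classical corrected sampled softmax, not this paper's unified estimator.

```python
import torch
import torch.nn.functional as F

def corrected_sampled_softmax(pos_logit, neg_logits, q_neg):
    """Classical bias correction for sampled softmax: subtracting log q(j)
    from each sampled negative's logit makes the truncated softmax an
    asymptotically unbiased surrogate for the full one, so rare labels are
    not systematically under-weighted. pos_logit: (B,); neg_logits and
    q_neg: (B, K), with q_neg the proposal probabilities of the negatives."""
    corrected = neg_logits - torch.log(q_neg)
    logits = torch.cat([pos_logit.unsqueeze(1), corrected], dim=1)
    target = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, target)       # positive sits at index 0
```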
- Debiased Contrastive Learning [64.98602526764599]
We develop a debiased contrastive objective that corrects for the sampling of same-label datapoints.
Empirically, the proposed objective consistently outperforms the state-of-the-art for representation learning in vision, language, and reinforcement learning benchmarks.
arXiv Detail & Related papers (2020-07-01T04:25:24Z)
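The debiased objective of Chuang et al. (2020) admits a compact implementation; a minimal PyTorch sketch follows, assuming a class-prior parameter `tau_plus` and temperature `t`, with precomputed similarities rather than a full training loop.

```python
import math
import torch

def debiased_contrastive_loss(pos_sim, neg_sim, tau_plus=0.1, t=0.5):
    """Debiased InfoNCE sketch: corrects for 'negatives' that secretly
    share the anchor's label. pos_sim: (B,) similarity to the positive
    view; neg_sim: (B, N) similarities to N unlabeled negatives."""
    N = neg_sim.size(1)
    pos = torch.exp(pos_sim / t)
    neg = torch.exp(neg_sim / t)
    # Estimated contribution of true negatives, clamped as in the paper.
    g = (neg.mean(dim=1) - tau_plus * pos) / (1.0 - tau_plus)
    g = g.clamp(min=math.exp(-1.0 / t))
    return -torch.log(pos / (pos + N * g)).mean()
```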
- A Sample Selection Approach for Universal Domain Adaptation [94.80212602202518]
We study the problem of unsupervised domain adaptation in the universal scenario.
Only some of the classes are shared between the source and target domains.
We present a scoring scheme that is effective in identifying the samples of the shared classes.
arXiv Detail & Related papers (2020-01-14T22:28:43Z)
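A minimal sketch of one common scoring recipe for this setting, assuming confidence and prediction entropy as the signals; the paper's actual scheme may differ.

```python
import math
import torch
import torch.nn.functional as F

def shared_class_score(logits):
    """Generic shared-class scoring sketch for universal DA: confident,
    low-entropy predictions on target samples suggest classes shared with
    the source; low scores flag likely private/unknown classes."""
    probs = F.softmax(logits, dim=1)
    conf = probs.max(dim=1).values
    ent = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    ent = ent / math.log(logits.size(1))   # normalize entropy to [0, 1]
    return conf - ent                      # higher => more likely shared
```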
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.