Consistency Regularization with Generative Adversarial Networks for
Semi-Supervised Learning
- URL: http://arxiv.org/abs/2007.03844v2
- Date: Tue, 15 Sep 2020 08:23:06 GMT
- Title: Consistency Regularization with Generative Adversarial Networks for
Semi-Supervised Learning
- Authors: Zexi Chen, Bharathkumar Ramachandra, Ranga Raju Vatsavai
- Abstract summary: Generative Adversarial Network (GAN)-based semi-supervised learning (SSL) approaches have been shown to improve classification performance by utilizing a large number of unlabeled samples.
However, their performance still lags behind the state-of-the-art non-GAN based SSL approaches.
We identify that the main reason for this is the lack of consistency in class probability predictions on the same image under local perturbations.
- Score: 2.9707483702447783
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative Adversarial Network (GAN)-based semi-supervised learning (SSL)
approaches have been shown to improve classification performance by utilizing a large
number of unlabeled samples in conjunction with limited labeled samples.
However, their performance still lags behind the state-of-the-art non-GAN based
SSL approaches. We identify that the main reason for this is the lack of
consistency in class probability predictions on the same image under local
perturbations. Following the general literature, we address this issue via
label consistency regularization, which enforces the class probability
predictions for an input image to be unchanged under various
semantic-preserving perturbations. In this work, we introduce consistency
regularization into the vanilla semi-GAN to address this critical limitation.
In particular, we present a new composite consistency regularization method
which, in spirit, leverages both local consistency and interpolation
consistency. We demonstrate the efficacy of our approach on two SSL image
classification benchmark datasets, SVHN and CIFAR-10. Our experiments show that
this new composite consistency regularization based semi-GAN significantly
improves classification performance and achieves a new state of the art among
GAN-based SSL approaches.
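To make the idea concrete, below is a minimal PyTorch-style sketch of a composite consistency term that combines local consistency (stable class-probability predictions on the same image under a semantic-preserving perturbation) with interpolation (mixup-style) consistency. All names here (discriminator, perturb, the lambda weights, the Beta(1, 1) mixing coefficient) are illustrative assumptions, not the authors' implementation.
```python
import torch
import torch.nn.functional as F

def composite_consistency_loss(discriminator, x_unlabeled, perturb,
                               lambda_local=1.0, lambda_interp=1.0):
    # discriminator is assumed to return class logits for a batch of images.
    # Clean predictions are detached so they act as fixed targets.
    with torch.no_grad():
        p_clean = F.softmax(discriminator(x_unlabeled), dim=1)

    # Local consistency: predictions should not change under a
    # semantic-preserving perturbation of the same image.
    p_perturbed = F.softmax(discriminator(perturb(x_unlabeled)), dim=1)
    local_term = F.mse_loss(p_perturbed, p_clean)

    # Interpolation consistency: the prediction at a mixed input should match
    # the same mix of the individual clean predictions (mixup-style).
    lam = torch.distributions.Beta(1.0, 1.0).sample().item()
    idx = torch.randperm(x_unlabeled.size(0), device=x_unlabeled.device)
    x_mix = lam * x_unlabeled + (1.0 - lam) * x_unlabeled[idx]
    p_mix_target = lam * p_clean + (1.0 - lam) * p_clean[idx]
    p_mix = F.softmax(discriminator(x_mix), dim=1)
    interp_term = F.mse_loss(p_mix, p_mix_target)

    return lambda_local * local_term + lambda_interp * interp_term
```
In training, such a term would be added to the usual semi-GAN objective (supervised cross-entropy on labeled data plus the real/fake adversarial loss), with the clean predictions treated as fixed targets.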
Related papers
- ItTakesTwo: Leveraging Peer Representations for Semi-supervised LiDAR Semantic Segmentation [24.743048965822297]
This paper introduces a novel semi-supervised LiDAR semantic segmentation framework called ItTakesTwo (IT2).
IT2 is designed to ensure consistent predictions from peer LiDAR representations, thereby improving the effectiveness of perturbations in consistency learning.
Results on public benchmarks show that our approach achieves remarkable improvements over the previous state-of-the-art (SOTA) methods in the field.
arXiv Detail & Related papers (2024-07-09T18:26:53Z)
- AstMatch: Adversarial Self-training Consistency Framework for Semi-Supervised Medical Image Segmentation [19.80612796391153]
Semi-supervised learning (SSL) has shown considerable potential in medical image segmentation.
In this work, we propose an adversarial self-training consistency framework (AstMatch).
The proposed AstMatch has been extensively evaluated against cutting-edge SSL methods on three publicly available datasets.
arXiv Detail & Related papers (2024-06-28T04:38:12Z)
- A Channel-ensemble Approach: Unbiased and Low-variance Pseudo-labels is Critical for Semi-supervised Classification [61.473485511491795]
Semi-supervised learning (SSL) is a practical but challenging problem in computer vision.
Pseudo-label (PL) methods, e.g., FixMatch and FreeMatch, achieve state-of-the-art (SOTA) performance in SSL.
We propose a lightweight channel-based ensemble method to consolidate multiple inferior PLs into a single pseudo-label that is theoretically guaranteed to be unbiased and low-variance.
arXiv Detail & Related papers (2024-03-27T09:49:37Z)
- Improving Representation Learning for Histopathologic Images with Cluster Constraints [31.426157660880673]
Self-supervised learning (SSL) pretraining strategies are emerging as a viable alternative.
We introduce an SSL framework for transferable representation learning and semantically meaningful clustering.
Our approach outperforms common SSL methods in downstream classification and clustering tasks.
arXiv Detail & Related papers (2023-10-18T21:20:44Z)
- Instance Adaptive Prototypical Contrastive Embedding for Generalized Zero Shot Learning [11.720039414872296]
Generalized zero-shot learning (GZSL) aims to classify samples from seen and unseen labels, assuming unseen labels are not accessible during training.
Recent advancements in GZSL have been expedited by incorporating contrastive-learning-based embedding in generative networks.
arXiv Detail & Related papers (2023-09-13T14:26:03Z)
- Revisiting Deep Semi-supervised Learning: An Empirical Distribution Alignment Framework and Its Generalization Bound [97.93945601881407]
We propose a new deep semi-supervised learning framework called Semi-supervised Learning by Empirical Distribution Alignment (SLEDA).
We show that the generalization error of semi-supervised learning can be effectively bounded by minimizing the training error on labeled data.
Building upon our new framework and the theoretical bound, we develop a simple and effective deep semi-supervised learning method called Augmented Distribution Alignment Network (ADA-Net).
arXiv Detail & Related papers (2022-03-13T11:59:52Z)
- Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and domain adaptation (DA).
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- Revisiting Consistency Regularization for Semi-Supervised Learning [80.28461584135967]
We propose an improved consistency regularization framework based on a simple yet effective technique, FeatDistLoss.
Experimental results show that our model defines a new state of the art for various datasets and settings.
arXiv Detail & Related papers (2021-12-10T20:46:13Z)
- On Data-Augmentation and Consistency-Based Semi-Supervised Learning [77.57285768500225]
Recently proposed consistency-based Semi-Supervised Learning (SSL) methods have advanced the state of the art in several SSL tasks.
Despite these advances, the understanding of these methods is still relatively limited.
arXiv Detail & Related papers (2021-01-18T10:12:31Z)
- Revisiting LSTM Networks for Semi-Supervised Text Classification via Mixed Objective Function [106.69643619725652]
We develop a training strategy that allows even a simple BiLSTM model, when trained with cross-entropy loss, to achieve competitive results.
We report state-of-the-art results for text classification on several benchmark datasets.
arXiv Detail & Related papers (2020-09-08T21:55:22Z)