Class-Aware Contrastive Semi-Supervised Learning
- URL: http://arxiv.org/abs/2203.02261v1
- Date: Fri, 4 Mar 2022 12:18:23 GMT
- Title: Class-Aware Contrastive Semi-Supervised Learning
- Authors: Fan Yang, Kai Wu, Shuyi Zhang, Guannan Jiang, Yong Liu, Feng Zheng,
Wei Zhang, Chengjie Wang, Long Zeng
- Abstract summary: We propose a general method named Class-aware Contrastive Semi-Supervised Learning (CCSSL) to improve pseudo-label quality and enhance the model's robustness in the real-world setting.
Our proposed CCSSL has significant performance improvements over the state-of-the-art SSL methods on the standard datasets CIFAR100 and STL10.
- Score: 51.205844705156046
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pseudo-label-based semi-supervised learning (SSL) has achieved great success
on raw data utilization. However, its training procedure suffers from
confirmation bias due to the noise contained in self-generated artificial
labels. Moreover, the model's judgment becomes noisier in real-world
applications with extensive out-of-distribution data. To address this issue, we
propose a general method named Class-aware Contrastive Semi-Supervised Learning
(CCSSL), which is a drop-in helper to improve the pseudo-label quality and
enhance the model's robustness in the real-world setting. Rather than treating
real-world data as a union set, our method separately handles reliable
in-distribution data with class-wise clustering for blending into downstream
tasks and noisy out-of-distribution data with image-wise contrastive learning for better
generalization. Furthermore, by applying target re-weighting, we successfully
emphasize clean label learning and simultaneously reduce noisy label learning.
Despite its simplicity, our proposed CCSSL has significant performance
improvements over the state-of-the-art SSL methods on the standard datasets
CIFAR100 and STL10. On the real-world dataset Semi-iNat 2021, we improve
FixMatch by 9.80% and CoMatch by 3.18%.
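The recipe described in the abstract (class-wise positives for reliable in-distribution samples, image-wise positives for the rest, with confidence-based target re-weighting) can be sketched in NumPy. This is a minimal illustration under assumed thresholds and an assumed weighting form, not the authors' implementation; the function name and parameters are hypothetical.

```python
import numpy as np

def class_aware_contrastive(feats, pseudo, conf, tau=0.5, thresh=0.95):
    """Toy class-aware contrastive loss (an illustration, not the paper's code).

    feats:  (2N, D) L2-normalized features, two augmented views per image;
            view i and view i + N come from the same image.
    pseudo: (2N,) hard pseudo-label per view.
    conf:   (2N,) confidence of that pseudo-label, used to re-weight targets.
    """
    n2 = feats.shape[0]
    sim = feats @ feats.T / tau
    np.fill_diagonal(sim, -np.inf)                 # exclude self-similarity
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    loss = 0.0
    for i in range(n2):
        if conf[i] >= thresh:                      # reliable: class-wise positives
            pos = np.where((pseudo == pseudo[i]) & (np.arange(n2) != i))[0]
        else:                                      # unreliable/OOD: image-wise positive
            pos = np.array([(i + n2 // 2) % n2])
        if pos.size == 0:
            continue
        loss += -conf[i] * logp[i, pos].mean()     # confidence re-weighting (assumed form)
    return loss / n2
```

Low-confidence samples only attract their own augmented view, so noisy pseudo-labels cannot pull unrelated images together, while the confidence weight further damps their gradient contribution.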
Related papers
- Uncertainty Aware Learning for Language Model Alignment [97.36361196793929]
We propose uncertainty-aware learning (UAL) to improve model alignment across different task scenarios.
We implement UAL in a simple fashion -- adaptively setting the label-smoothing value during training according to the uncertainty of individual samples.
Experiments on widely used benchmarks demonstrate that our UAL significantly and consistently outperforms standard supervised fine-tuning.
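The adaptive label-smoothing idea can be sketched as follows. The linear mapping from uncertainty to smoothing strength and the `max_eps` cap are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def uncertainty_smoothed_targets(labels, uncertainty, num_classes, max_eps=0.3):
    """Per-sample label smoothing scaled by uncertainty (UAL-style sketch).

    labels:      (N,) integer class labels.
    uncertainty: (N,) per-sample uncertainty in [0, 1].
    Returns (N, num_classes) soft targets: certain samples stay near one-hot,
    uncertain samples get flatter targets. The linear map is an assumption.
    """
    labels = np.asarray(labels)
    eps = max_eps * np.asarray(uncertainty, dtype=float)
    targets = np.tile((eps / (num_classes - 1))[:, None], (1, num_classes))
    targets[np.arange(len(labels)), labels] = 1.0 - eps
    return targets
```

Each row sums to 1 by construction, so the soft targets plug directly into a standard cross-entropy loss.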
arXiv Detail & Related papers (2024-06-07T11:37:45Z)
- MaskMatch: Boosting Semi-Supervised Learning Through Mask Autoencoder-Driven Feature Learning [8.255082589733673]
MaskMatch is a novel algorithm that fully utilizes unlabeled data to boost semi-supervised learning.
MaskMatch integrates a self-supervised learning strategy, i.e., Masked Autoencoder (MAE), that uses all available data to enforce visual representation learning.
MaskMatch achieves low error rates of 18.71%, 9.47%, and 3.07% on three challenging datasets.
arXiv Detail & Related papers (2024-05-10T03:39:54Z)
- A Channel-ensemble Approach: Unbiased and Low-variance Pseudo-labels is Critical for Semi-supervised Classification [61.473485511491795]
Semi-supervised learning (SSL) is a practical challenge in computer vision.
Pseudo-label (PL) methods, e.g., FixMatch and FreeMatch, achieve state-of-the-art (SOTA) performance in SSL.
We propose a lightweight channel-based ensemble method that consolidates multiple inferior PLs into a single pseudo-label that is theoretically guaranteed to be unbiased and low-variance.
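The consolidation step can be illustrated with a plain averaging ensemble over channels; this sketch omits the paper's channel construction and the analysis behind its bias/variance guarantees.

```python
import numpy as np

def ensemble_pseudo_label(channel_probs):
    """Consolidate per-channel predictions by averaging (illustrative sketch).

    channel_probs: (C, N, K) softmax outputs from C channels.
    Returns the averaged distribution (N, K) and hard pseudo-labels (N,).
    Averaging C independent estimates shrinks variance roughly by 1/C.
    """
    avg = np.mean(channel_probs, axis=0)
    return avg, avg.argmax(axis=1)
```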
arXiv Detail & Related papers (2024-03-27T09:49:37Z)
- Semi-Supervised Learning in the Few-Shot Zero-Shot Scenario [14.916971861796384]
Semi-Supervised Learning (SSL) is a framework that utilizes both labeled and unlabeled data to enhance model performance.
We propose a general approach to augment existing SSL methods, enabling them to handle situations where certain classes are missing.
Our experimental results reveal significant improvements in accuracy when compared to state-of-the-art SSL, open-set SSL, and open-world SSL methods.
arXiv Detail & Related papers (2023-08-27T14:25:07Z)
- Dual Class-Aware Contrastive Federated Semi-Supervised Learning [9.742389743497045]
We present a novel Federated Semi-Supervised Learning (FSSL) method called Dual Class-aware Contrastive Federated Semi-Supervised Learning (DCCFSSL).
By implementing a dual class-aware contrastive module, DCCFSSL establishes a unified training objective for different clients to tackle large deviations.
Our experiments show that DCCFSSL outperforms current state-of-the-art methods on three benchmark datasets.
arXiv Detail & Related papers (2022-11-16T13:54:31Z)
- Towards Realistic Semi-Supervised Learning [73.59557447798134]
We propose a novel approach to tackle SSL in the open-world setting, where we simultaneously learn to classify known and unknown classes.
Our approach substantially outperforms the existing state-of-the-art on seven diverse datasets.
arXiv Detail & Related papers (2022-07-05T19:04:43Z)
- SCARF: Self-Supervised Contrastive Learning using Random Feature Corruption [72.35532598131176]
We propose SCARF, a technique for contrastive learning, where views are formed by corrupting a random subset of features.
We show that SCARF complements existing strategies and outperforms alternatives like autoencoders.
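SCARF's view construction can be sketched as follows: a random subset of a sample's features is replaced by values drawn from those features' empirical marginals, here approximated by sampling random rows of the training matrix. The corruption fraction used as the default below is an assumption for illustration.

```python
import numpy as np

def scarf_corrupt(x, data, p=0.6, rng=None):
    """Build a SCARF-style corrupted view of one sample (sketch).

    x:    (D,) feature vector to corrupt.
    data: (M, D) training matrix supplying the empirical marginals.
    p:    fraction of features to corrupt (illustrative default).
    """
    if rng is None:
        rng = np.random.default_rng()
    x = np.asarray(x, dtype=float).copy()
    d = x.shape[0]
    idx = rng.choice(d, size=max(1, int(p * d)), replace=False)
    donors = rng.integers(0, data.shape[0], size=idx.size)
    x[idx] = data[donors, idx]        # replace with marginal samples, feature-wise
    return x
```

Two such corrupted views of the same row then serve as the positive pair for a standard contrastive loss.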
arXiv Detail & Related papers (2021-06-29T08:08:33Z)
- In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning [53.1047775185362]
Pseudo-labeling (PL) is a general SSL approach that does not rely on domain-specific data augmentations, but it performs relatively poorly in its original formulation.
We argue that PL underperforms due to the erroneous high confidence predictions from poorly calibrated models.
We propose an uncertainty-aware pseudo-label selection (UPS) framework which improves pseudo labeling accuracy by drastically reducing the amount of noise encountered in the training process.
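The selection rule can be sketched as a joint threshold on confidence and uncertainty; the threshold values and the choice of uncertainty estimator (e.g., MC-dropout standard deviation) are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def select_pseudo_labels(probs, uncertainty, conf_tau=0.9, unc_tau=0.05):
    """UPS-style pseudo-label selection (sketch with illustrative thresholds).

    probs:       (N, K) softmax outputs on unlabeled data.
    uncertainty: (N,) per-sample uncertainty, e.g. MC-dropout std.
    Keeps a pseudo-label only when the prediction is both confident AND
    low-uncertainty, cutting the noise fed back into training.
    """
    conf = probs.max(axis=1)
    keep = (conf >= conf_tau) & (uncertainty <= unc_tau)
    return np.where(keep)[0], probs.argmax(axis=1)[keep]
```

Only the selected (index, label) pairs re-enter training as supervised targets; everything else waits for the model to become better calibrated.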
arXiv Detail & Related papers (2021-01-15T23:29:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.