RankMatch: A Novel Approach to Semi-Supervised Label Distribution
Learning Leveraging Inter-label Correlations
- URL: http://arxiv.org/abs/2312.06343v1
- Date: Mon, 11 Dec 2023 12:47:29 GMT
- Title: RankMatch: A Novel Approach to Semi-Supervised Label Distribution
Learning Leveraging Inter-label Correlations
- Authors: Kouzhiqiang Yucheng Xie, Jing Wang, Yuheng Jia, Boyu Shi, Xin Geng
- Abstract summary: This paper introduces RankMatch, an innovative approach for Semi-Supervised Label Distribution Learning (SSLDL).
RankMatch effectively utilizes a small number of labeled examples in conjunction with a larger quantity of unlabeled data.
We establish a theoretical generalization bound for RankMatch, and through extensive experiments, demonstrate its superiority in performance against existing SSLDL methods.
- Score: 52.549807652527306
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces RankMatch, an innovative approach for Semi-Supervised
Label Distribution Learning (SSLDL). Addressing the challenge of limited
labeled data, RankMatch effectively utilizes a small number of labeled examples
in conjunction with a larger quantity of unlabeled data, reducing the need for
extensive manual labeling in Deep Neural Network (DNN) applications.
Specifically, RankMatch introduces an ensemble learning-inspired averaging
strategy that creates a pseudo-label distribution from multiple weakly
augmented images. This not only stabilizes predictions but also enhances the
model's robustness. Beyond this, RankMatch integrates a pairwise relevance
ranking (PRR) loss, capturing the complex inter-label correlations and ensuring
that the predicted label distributions align with the ground truth.
We establish a theoretical generalization bound for RankMatch, and through
extensive experiments, demonstrate its superiority in performance against
existing SSLDL methods.
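The abstract names two mechanisms, averaging predictions over several weakly augmented views to form a pseudo-label distribution and a pairwise relevance ranking (PRR) loss over label pairs, but gives no implementation details. Below is a minimal PyTorch sketch under stated assumptions: the pseudo-label distribution is the mean of softmax outputs over K weak views, and the PRR term is written as a margin-based hinge that penalizes label pairs whose predicted ordering disagrees with the pseudo-label ordering. The function names, the margin, and the KL-divergence consistency term in the usage comments are illustrative choices, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def average_pseudo_label(model, weak_views, temperature=1.0):
    """Average softmax predictions over K weakly augmented views of the same
    unlabeled images to form a pseudo-label distribution.
    weak_views: tensor (or list) of shape (K, B, C, H, W)."""
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(v) / temperature, dim=-1) for v in weak_views]
        )                              # (K, B, num_labels)
        return probs.mean(dim=0)       # (B, num_labels)

def pairwise_relevance_ranking_loss(pred, pseudo, margin=0.0):
    """Hinge-style ranking term: for every label pair (i, j) that the pseudo
    distribution ranks i above j, penalize predictions that do not preserve
    that ordering by at least `margin`."""
    pred_diff = pred.unsqueeze(2) - pred.unsqueeze(1)      # (B, L, L), [b,i,j] = pred_i - pred_j
    pseudo_diff = pseudo.unsqueeze(2) - pseudo.unsqueeze(1)
    order = (pseudo_diff > 0).float()                      # 1 where i is ranked above j
    loss = F.relu(margin - pred_diff) * order
    return loss.sum(dim=(1, 2)) / order.sum(dim=(1, 2)).clamp(min=1)

# Illustrative unsupervised step on a batch of unlabeled images:
# pseudo = average_pseudo_label(model, weak_views)            # (B, L)
# pred   = F.softmax(model(strong_view), dim=-1)              # (B, L)
# loss_u = F.kl_div(pred.log(), pseudo, reduction='batchmean') \
#          + pairwise_relevance_ranking_loss(pred, pseudo).mean()
```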
Related papers
- Towards Realistic Long-tailed Semi-supervised Learning in an Open World [0.0]
We construct a more realistic Open-world Long-tailed Semi-supervised Learning (ROLSSL) setting in which no assumptions are made about the distribution relationships between known and novel categories.
Under the proposed ROLSSL setting, we propose a simple yet potentially effective solution called dual-stage logit adjustments.
Experiments on datasets such as CIFAR100 and ImageNet100 have demonstrated performance improvements of up to 50.1%.
arXiv Detail & Related papers (2024-05-23T12:53:50Z)
- JointMatch: A Unified Approach for Diverse and Collaborative Pseudo-Labeling to Semi-Supervised Text Classification [65.268245109828]
Semi-supervised text classification (SSTC) has gained increasing attention due to its ability to leverage unlabeled data.
Existing approaches based on pseudo-labeling suffer from the issues of pseudo-label bias and error accumulation.
We propose JointMatch, a holistic approach for SSTC that addresses these challenges by unifying ideas from recent semi-supervised learning.
arXiv Detail & Related papers (2023-10-23T05:43:35Z)
- Class-Distribution-Aware Pseudo Labeling for Semi-Supervised Multi-Label Learning [97.88458953075205]
Pseudo-labeling has emerged as a popular and effective approach for utilizing unlabeled data.
This paper proposes a novel solution called Class-Aware Pseudo-Labeling (CAP) that performs pseudo-labeling in a class-aware manner.
arXiv Detail & Related papers (2023-05-04T12:52:18Z)
- Boosting Semi-Supervised Learning by Exploiting All Unlabeled Data [21.6350640726058]
Semi-supervised learning (SSL) has attracted enormous attention due to its vast potential of mitigating the dependence on large labeled datasets.
We propose two novel techniques: Entropy Meaning Loss (EML) and Adaptive Negative Learning (ANL).
We integrate these techniques with FixMatch, and develop a simple yet powerful framework called FullMatch.
arXiv Detail & Related papers (2023-03-20T12:44:11Z)
- Dense FixMatch: a simple semi-supervised learning method for pixel-wise prediction tasks [68.36996813591425]
We propose Dense FixMatch, a simple method for online semi-supervised learning of dense and structured prediction tasks.
We enable the application of FixMatch in semi-supervised learning problems beyond image classification by adding a matching operation on the pseudo-labels.
Dense FixMatch significantly improves results compared to supervised learning using only labeled data, approaching its performance with 1/4 of the labeled samples.
arXiv Detail & Related papers (2022-10-18T15:02:51Z)
- AggMatch: Aggregating Pseudo Labels for Semi-Supervised Learning [25.27527138880104]
Semi-supervised learning has proven to be an effective paradigm for leveraging a huge amount of unlabeled data.
We introduce AggMatch, which refines initial pseudo labels by using different confident instances.
We conduct experiments to demonstrate the effectiveness of AggMatch over the latest methods on standard benchmarks.
arXiv Detail & Related papers (2022-01-25T16:41:54Z)
- OpenMatch: Open-set Consistency Regularization for Semi-supervised Learning with Outliers [71.08167292329028]
We propose a novel Open-set Semi-Supervised Learning (OSSL) approach called OpenMatch.
OpenMatch unifies FixMatch with novelty detection based on one-vs-all (OVA) classifiers.
It achieves state-of-the-art performance on three datasets, and even outperforms a fully supervised model in detecting outliers unseen in unlabeled data on CIFAR10.
arXiv Detail & Related papers (2021-05-28T23:57:15Z)
- Delving Deep into Label Smoothing [112.24527926373084]
Label smoothing is an effective regularization tool for deep neural networks (DNNs).
We present an Online Label Smoothing (OLS) strategy, which generates soft labels based on the statistics of the model prediction for the target category.
arXiv Detail & Related papers (2020-11-25T08:03:11Z)
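The Online Label Smoothing entry above states only that soft labels are generated from the statistics of the model's predictions for the target category. The following is a simplified PyTorch sketch consistent with that description: predictions on correctly classified samples are accumulated per target class during an epoch and normalized into the soft targets used in the next epoch. The class name, the hard_weight mixing coefficient, and the per-epoch update schedule are assumptions for illustration rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

class OnlineLabelSmoothing:
    """Simplified sketch of online label smoothing: accumulate softmax outputs
    on correctly classified samples per target class, and use the normalized
    accumulation from the previous epoch as the soft target for the current one."""

    def __init__(self, num_classes, hard_weight=0.5):
        self.num_classes = num_classes
        self.hard_weight = hard_weight                    # weight kept on the one-hot target
        self.soft = torch.full((num_classes, num_classes), 1.0 / num_classes)
        self._accum = torch.zeros(num_classes, num_classes)
        self._count = torch.zeros(num_classes)

    def loss(self, logits, targets):
        probs = F.softmax(logits, dim=-1)
        with torch.no_grad():
            # gather statistics from correctly classified samples (on CPU for simplicity)
            correct = probs.argmax(dim=-1) == targets
            for p, t in zip(probs[correct].cpu(), targets[correct].cpu()):
                self._accum[t] += p
                self._count[t] += 1
        one_hot = F.one_hot(targets, self.num_classes).float()
        soft = self.soft.to(logits.device)[targets]
        target_dist = self.hard_weight * one_hot + (1 - self.hard_weight) * soft
        return -(target_dist * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()

    def end_epoch(self):
        # normalize accumulated predictions into the next epoch's soft labels
        mask = self._count > 0
        self.soft[mask] = self._accum[mask] / self._count[mask].unsqueeze(-1)
        self._accum.zero_()
        self._count.zero_()
```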