DualMatch: Robust Semi-Supervised Learning with Dual-Level Interaction
- URL: http://arxiv.org/abs/2310.16459v1
- Date: Wed, 25 Oct 2023 08:34:05 GMT
- Title: DualMatch: Robust Semi-Supervised Learning with Dual-Level Interaction
- Authors: Cong Wang, Xiaofeng Cao, Lanzhe Guo, and Zenglin Shi
- Abstract summary: Previous semi-supervised learning methods typically match model predictions of different data-augmented views in a single-level interaction manner.
We propose a novel SSL method called DualMatch, in which the class prediction jointly invokes feature embedding in a dual-level interaction manner.
In the standard SSL setting, the proposal achieves a 9% error reduction compared with SOTA methods; even in a more challenging class-imbalanced setting, it can still achieve a 6% error reduction.
- Score: 10.775623936099173
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semi-supervised learning provides an expressive framework for exploiting
unlabeled data when labels are insufficient. Previous semi-supervised learning
methods typically match model predictions of different data-augmented views in
a single-level interaction manner, which relies heavily on the quality of
pseudo-labels and makes semi-supervised learning not robust. In this
paper, we propose a novel SSL method called DualMatch, in which the class
prediction jointly invokes the feature embedding in a dual-level interaction
manner. DualMatch requires consistent regularizations for data augmentation,
specifically, 1) ensuring that different augmented views are regulated with
consistent class predictions, and 2) ensuring that different data of one class
are regulated with similar feature embeddings. Extensive experiments
demonstrate the effectiveness of DualMatch. In the standard SSL setting, the
proposal achieves a 9% error reduction compared with SOTA methods; even in a more
challenging class-imbalanced setting, the proposal can still achieve a 6% error
reduction. Code is available at https://github.com/CWangAI/DualMatch
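The two consistency constraints above could be sketched roughly as follows. This is an illustrative numpy sketch, not the authors' implementation: the function names, the centroid-based feature term, and the confidence threshold `tau` are all assumptions for exposition; the paper's exact losses are defined in the repository linked above.

```python
import numpy as np

def cross_entropy(target, pred, eps=1e-12):
    # Per-sample cross-entropy between a target distribution and a prediction.
    return -np.sum(target * np.log(pred + eps), axis=-1)

def dual_level_loss(pred_weak, pred_strong, emb, pseudo_labels, tau=0.95):
    """Hypothetical dual-level consistency objective.

    1) Class level: strong-view predictions should agree with confident
       pseudo-labels taken from the weak view.
    2) Feature level: embeddings of samples sharing a pseudo-label should
       be similar (here approximated as closeness to their class centroid).
    """
    conf = pred_weak.max(axis=1)
    mask = conf >= tau                       # keep only confident pseudo-labels
    onehot = np.eye(pred_weak.shape[1])[pseudo_labels]
    class_loss = (cross_entropy(onehot, pred_strong) * mask).sum() / max(mask.sum(), 1)

    # Feature level: pull each embedding toward its pseudo-class centroid.
    feat_loss, n_classes = 0.0, 0
    for c in np.unique(pseudo_labels[mask]):
        members = emb[mask & (pseudo_labels == c)]
        if len(members) > 1:
            centroid = members.mean(axis=0)
            feat_loss += np.mean(np.sum((members - centroid) ** 2, axis=1))
            n_classes += 1
    feat_loss = feat_loss / max(n_classes, 1)
    return class_loss + feat_loss
```

The point of the sketch is only the interaction: the same pseudo-labels drive both the prediction-matching term and the embedding-alignment term, so the two levels regularize each other.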
Related papers
- JointMatch: A Unified Approach for Diverse and Collaborative Pseudo-Labeling to Semi-Supervised Text Classification [65.268245109828]
Semi-supervised text classification (SSTC) has gained increasing attention due to its ability to leverage unlabeled data.
Existing approaches based on pseudo-labeling suffer from the issues of pseudo-label bias and error accumulation.
We propose JointMatch, a holistic approach for SSTC that addresses these challenges by unifying ideas from recent semi-supervised learning.
arXiv Detail & Related papers (2023-10-23T05:43:35Z)
- Boosting Semi-Supervised Learning by Exploiting All Unlabeled Data [21.6350640726058]
Semi-supervised learning (SSL) has attracted enormous attention due to its vast potential of mitigating the dependence on large labeled datasets.
We propose two novel techniques: Entropy Meaning Loss (EML) and Adaptive Negative Learning (ANL)
We integrate these techniques with FixMatch, and develop a simple yet powerful framework called FullMatch.
arXiv Detail & Related papers (2023-03-20T12:44:11Z)
- Adversarial Dual-Student with Differentiable Spatial Warping for Semi-Supervised Semantic Segmentation [70.2166826794421]
We propose a differentiable geometric warping to conduct unsupervised data augmentation.
We also propose a novel adversarial dual-student framework to improve the Mean-Teacher.
Our solution significantly improves the performance and state-of-the-art results are achieved on both datasets.
arXiv Detail & Related papers (2022-03-05T17:36:17Z)
- OpenMatch: Open-set Consistency Regularization for Semi-supervised Learning with Outliers [71.08167292329028]
We propose a novel Open-set Semi-Supervised Learning (OSSL) approach called OpenMatch.
OpenMatch unifies FixMatch with novelty detection based on one-vs-all (OVA) classifiers.
It achieves state-of-the-art performance on three datasets, and even outperforms a fully supervised model in detecting outliers unseen in unlabeled data on CIFAR10.
arXiv Detail & Related papers (2021-05-28T23:57:15Z)
- Improving Calibration for Long-Tailed Recognition [68.32848696795519]
We propose two methods to improve calibration and performance in such scenarios.
For dataset bias due to different samplers, we propose shifted batch normalization.
Our proposed methods set new records on multiple popular long-tailed recognition benchmark datasets.
arXiv Detail & Related papers (2021-04-01T13:55:21Z)
- CoMatch: Semi-supervised Learning with Contrastive Graph Regularization [86.84486065798735]
CoMatch is a new semi-supervised learning method that unifies dominant approaches.
It achieves state-of-the-art performance on multiple datasets.
arXiv Detail & Related papers (2020-11-23T02:54:57Z)
- FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence [93.91751021370638]
Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data to improve a model's performance.
In this paper, we demonstrate the power of a simple combination of two common SSL methods: consistency regularization and pseudo-labeling.
Our algorithm, FixMatch, first generates pseudo-labels using the model's predictions on weakly-augmented unlabeled images.
arXiv Detail & Related papers (2020-01-21T18:32:27Z)
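The FixMatch recipe summarized above (pseudo-labels from the weakly-augmented view, filtered by confidence, supervising the strongly-augmented view) could be sketched as follows. This is a minimal illustrative sketch, not the authors' code; the function name and the threshold value are assumptions.

```python
import numpy as np

def fixmatch_unlabeled_loss(pred_weak, pred_strong, tau=0.95, eps=1e-12):
    """Sketch of a FixMatch-style unlabeled loss.

    pred_weak / pred_strong: (N, C) softmax outputs for the weakly and
    strongly augmented views of the same N unlabeled images.
    """
    pseudo = pred_weak.argmax(axis=1)        # hard pseudo-labels from the weak view
    mask = pred_weak.max(axis=1) >= tau      # keep only confident predictions
    # Cross-entropy of the strong view against the retained pseudo-labels.
    ce = -np.log(pred_strong[np.arange(len(pseudo)), pseudo] + eps)
    return (ce * mask).sum() / max(mask.sum(), 1)
```

The confidence mask is what makes the combination simple yet effective: low-confidence predictions contribute nothing, so early-training noise does not propagate into the pseudo-label supervision.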
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.