Relieving the Plateau: Active Semi-Supervised Learning for a Better Landscape
- URL: http://arxiv.org/abs/2104.03525v1
- Date: Thu, 8 Apr 2021 06:03:59 GMT
- Title: Relieving the Plateau: Active Semi-Supervised Learning for a Better Landscape
- Authors: Seo Taek Kong, Soomin Jeon, Jaewon Lee, Hongseok Lee, Kyu-Hwan Jung
- Abstract summary: Semi-supervised learning (SSL) leverages unlabeled data that are more accessible than their labeled counterparts.
Active learning (AL) selects unlabeled instances to be annotated by a human-in-the-loop in hopes of better performance with less labeled data.
We propose convergence rate control (CRC), an AL algorithm that selects unlabeled data to improve the problem conditioning upon their inclusion in the labeled set.
- Score: 2.3046646540823916
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning (DL) relies on massive amounts of labeled data, and improving
its labeled sample-efficiency remains one of the most important problems since
its advent. Semi-supervised learning (SSL) leverages unlabeled data that are
more accessible than their labeled counterparts. Active learning (AL) selects
unlabeled instances to be annotated by a human-in-the-loop in hopes of better
performance with less labeled data. Given the accessible pool of unlabeled data
in pool-based AL, it seems natural to use SSL when training and AL to update
the labeled set; however, algorithms designed for their combination remain
limited. In this work, we first prove that convergence of gradient descent on
sufficiently wide ReLU networks can be expressed in terms of their Gram matrix's
eigen-spectrum. Equipped with a few theoretical insights, we propose
convergence rate control (CRC), an AL algorithm that selects unlabeled data to
improve the problem conditioning upon their inclusion in the labeled set, by
formulating an acquisition step in terms of improving training dynamics.
Extensive experiments show that SSL algorithms coupled with CRC can achieve
high performance using very few labeled data.
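For context, the convergence statement above is in the spirit of NTK-style analyses of wide networks (e.g., Du et al., 2019), where the training error of gradient descent contracts at a rate governed by the smallest eigenvalue of the Gram matrix on the labeled points. A bound of this flavor, in notation assumed here rather than taken verbatim from the paper, looks like:

```latex
% u(k): network outputs on the n labeled inputs after k gradient steps,
% y: labels, eta: step size, H^infty: limiting (NTK) Gram matrix of the labeled set.
\| \mathbf{y} - \mathbf{u}(k) \|_2^2
  \;\le\; \Big( 1 - \tfrac{\eta \, \lambda_{\min}(\mathbf{H}^{\infty})}{2} \Big)^{k}
  \, \| \mathbf{y} - \mathbf{u}(0) \|_2^2
```

so a larger smallest eigenvalue of the labeled-set Gram matrix implies faster convergence, which is the quantity a conditioning-aware acquisition step can try to improve. The CRC acquisition rule itself is not reproduced here; the following is a minimal, hypothetical sketch of the general idea, greedily picking pool points whose inclusion most raises the smallest eigenvalue of a kernel Gram matrix on the labeled set. The RBF kernel, the function names, and the greedy loop are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    """Gram matrix under an RBF kernel (a stand-in for the network's Gram/NTK matrix)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def min_eig(X, gamma=1.0):
    """Smallest eigenvalue of the Gram matrix, used as a conditioning proxy."""
    return np.linalg.eigvalsh(rbf_gram(X, gamma)).min()

def greedy_conditioning_acquisition(X_labeled, X_pool, budget, gamma=1.0):
    """Greedily pick pool points whose inclusion most raises the smallest
    eigenvalue of the labeled-set Gram matrix (an illustrative, CRC-like
    conditioning criterion, not the paper's exact rule)."""
    labeled = [x for x in X_labeled]
    pool = list(range(len(X_pool)))
    chosen = []
    for _ in range(budget):
        scores = [min_eig(np.vstack(labeled + [X_pool[i]]), gamma) for i in pool]
        best = pool[int(np.argmax(scores))]
        chosen.append(best)
        labeled.append(X_pool[best])
        pool.remove(best)
    return chosen

# Toy usage: 10 labeled points in 5 dimensions, acquire 3 of 50 pool points.
rng = np.random.default_rng(0)
X_lab = rng.normal(size=(10, 5))
X_pool = rng.normal(size=(50, 5))
print(greedy_conditioning_acquisition(X_lab, X_pool, budget=3))
```

In practice one would use a model-dependent kernel (such as the network's empirical NTK) rather than a fixed RBF kernel, and update eigenvalues incrementally rather than recomputing full decompositions, but the sketch conveys the conditioning-based selection criterion.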
Related papers
- Learning Label Refinement and Threshold Adjustment for Imbalanced Semi-Supervised Learning [6.904448748214652]
Semi-supervised learning algorithms struggle to perform well when exposed to imbalanced training data.
We introduce SEmi-supervised learning with pseudo-label optimization based on VALidation data (SEVAL).
SEVAL adapts to specific tasks with improved pseudo-label accuracy and ensures pseudo-label correctness on a per-class basis.
arXiv Detail & Related papers (2024-07-07T13:46:22Z)
- FlatMatch: Bridging Labeled Data and Unlabeled Data with Cross-Sharpness for Semi-Supervised Learning [73.13448439554497]
Semi-Supervised Learning (SSL) has been an effective way to leverage abundant unlabeled data with extremely scarce labeled data.
Most SSL methods are commonly based on instance-wise consistency between different data transformations.
We propose FlatMatch, which minimizes a cross-sharpness measure to ensure consistent learning performance between the labeled and unlabeled data.
arXiv Detail & Related papers (2023-10-25T06:57:59Z)
- Active Semi-Supervised Learning by Exploring Per-Sample Uncertainty and Consistency [30.94964727745347]
We propose a method called Active Semi-supervised Learning (ASSL) to improve the accuracy of models at a lower cost.
ASSL involves more dynamic model updates than Active Learning (AL) due to the use of unlabeled data.
ASSL achieved about 5.3 times higher computational efficiency than Semi-supervised Learning (SSL) at the same level of performance.
arXiv Detail & Related papers (2023-03-15T22:58:23Z)
- Contrastive Credibility Propagation for Reliable Semi-Supervised Learning [6.014538614447467]
We propose Contrastive Credibility Propagation (CCP) for deep SSL via iterative transductive pseudo-label refinement.
CCP unifies semi-supervised learning and noisy label learning for the goal of reliably outperforming a supervised baseline in any data scenario.
arXiv Detail & Related papers (2022-11-17T23:01:47Z)
- Collaborative Intelligence Orchestration: Inconsistency-Based Fusion of Semi-Supervised Learning and Active Learning [60.26659373318915]
Active learning (AL) and semi-supervised learning (SSL) are two effective, but often isolated, means to alleviate the data-hungry problem.
We propose an innovative inconsistency-based virtual adversarial algorithm to further investigate SSL-AL's potential superiority.
Two real-world case studies illustrate the practical industrial value of applying and deploying the proposed data sampling algorithm.
arXiv Detail & Related papers (2022-06-07T13:28:43Z)
- Dash: Semi-Supervised Learning with Dynamic Thresholding [72.74339790209531]
We propose a semi-supervised learning (SSL) approach that selects which unlabeled examples to use for training via a dynamically adjusted threshold.
Our proposed approach, Dash, is adaptive in its selection of unlabeled data (a generic sketch of threshold-based pseudo-label selection appears after this list).
arXiv Detail & Related papers (2021-09-01T23:52:29Z)
- Self-Tuning for Data-Efficient Deep Learning [75.34320911480008]
Self-Tuning is a novel approach to enable data-efficient deep learning.
It unifies the exploration of labeled and unlabeled data and the transfer of a pre-trained model.
It outperforms its SSL and TL counterparts on five tasks by sharp margins.
arXiv Detail & Related papers (2021-02-25T14:56:19Z)
- In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning [53.1047775185362]
Pseudo-labeling (PL) is a general SSL approach that does not rely on domain-specific data augmentations, but it performs relatively poorly in its original formulation.
We argue that PL underperforms due to the erroneous high confidence predictions from poorly calibrated models.
We propose an uncertainty-aware pseudo-label selection (UPS) framework which improves pseudo labeling accuracy by drastically reducing the amount of noise encountered in the training process.
arXiv Detail & Related papers (2021-01-15T23:29:57Z)
- Distribution Aligning Refinery of Pseudo-label for Imbalanced Semi-supervised Learning [126.31716228319902]
We develop the Distribution Aligning Refinery of Pseudo-label (DARP) algorithm.
We show that DARP is provably and efficiently compatible with state-of-the-art SSL schemes.
arXiv Detail & Related papers (2020-07-17T09:16:05Z)
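Several of the entries above (Dash, UPS, DARP) revolve around deciding which pseudo-labeled examples to keep during training. The snippet below is a generic illustration of confidence-thresholded pseudo-label selection with a decaying threshold; the schedule, parameter names, and defaults are assumptions for illustration and do not reproduce any of these papers' actual rules.

```python
import numpy as np

def select_pseudo_labels(probs, step, tau0=0.95, decay=0.999, tau_min=0.6):
    """Keep unlabeled examples whose top predicted probability exceeds a
    threshold that decays over training steps (illustrative only).

    probs: (n_unlabeled, n_classes) softmax outputs of the current model.
    Returns the indices of kept examples and their hard pseudo-labels.
    """
    tau = max(tau_min, tau0 * decay ** step)  # dynamically relaxed threshold
    conf = probs.max(axis=1)                  # per-example model confidence
    keep = np.where(conf >= tau)[0]
    return keep, probs[keep].argmax(axis=1)

# Toy usage with random "predictions" over 10 classes.
rng = np.random.default_rng(0)
p = rng.dirichlet(alpha=np.ones(10) * 0.3, size=100)
idx, y_hat = select_pseudo_labels(p, step=500)
print(len(idx), "of", len(p), "unlabeled examples kept")
```

Class-imbalance-aware variants (per-class thresholds or distribution alignment, as in SEVAL and DARP) would adjust the kept set per class rather than with a single global threshold.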
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.