Divide-and-Conquer for Enhancing Unlabeled Learning, Stability, and Plasticity in Semi-supervised Continual Learning
- URL: http://arxiv.org/abs/2508.05316v1
- Date: Thu, 07 Aug 2025 12:19:23 GMT
- Title: Divide-and-Conquer for Enhancing Unlabeled Learning, Stability, and Plasticity in Semi-supervised Continual Learning
- Authors: Yue Duan, Taicai Chen, Lei Qi, Yinghuan Shi
- Abstract summary: Semi-supervised continual learning (SSCL) seeks to leverage both labeled and unlabeled data in a sequential learning setup. This work presents USP, a divide-and-conquer framework designed to synergistically enhance three aspects of SSCL. USP outperforms prior SSCL methods, with gains of up to 5.94% in last accuracy, validating its effectiveness.
- Score: 24.429184628642012
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Semi-supervised continual learning (SSCL) seeks to leverage both labeled and unlabeled data in a sequential learning setup, aiming to reduce annotation costs while managing continual data arrival. SSCL introduces complex challenges, including ensuring effective unlabeled learning (UL) while balancing memory stability (MS) and learning plasticity (LP). Previous SSCL efforts have typically focused on isolated aspects of these three challenges; in contrast, this work presents USP, a divide-and-conquer framework designed to synergistically enhance all three: (1) a Feature Space Reservation (FSR) strategy for LP, which constructs reserved feature locations for future classes by shaping old classes into an equiangular tight frame; (2) a Divide-and-Conquer Pseudo-labeling (DCP) approach for UL, which assigns reliable pseudo-labels across both high- and low-confidence unlabeled data; and (3) Class-mean-anchored Unlabeled Distillation (CUD) for MS, which reuses DCP's outputs to anchor unlabeled data to stable class means for distillation, preventing forgetting. Comprehensive evaluations show that USP outperforms prior SSCL methods, with gains of up to 5.94% in last accuracy, validating its effectiveness. The code is available at https://github.com/NJUyued/USP4SSCL.
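The FSR component shapes old classes into an equiangular tight frame (ETF) and reserves the remaining anchors for classes that have not arrived yet. The abstract gives no implementation details, so the NumPy sketch below only illustrates the standard simplex-ETF construction such a strategy can build on; the function name, seed, and dimensions are illustrative assumptions, not the authors' code.

```python
import numpy as np

def simplex_etf(num_classes: int, feat_dim: int, seed: int = 0) -> np.ndarray:
    """Return a (feat_dim, num_classes) matrix whose columns are unit-norm
    class anchors with pairwise cosine similarity exactly -1/(K-1): a
    simplex equiangular tight frame. Fixing K to the total class count
    (old + future) reserves one feature location per future class."""
    assert feat_dim >= num_classes, "need feat_dim >= num_classes"
    rng = np.random.default_rng(seed)
    # Random orthonormal columns U of shape (feat_dim, K).
    u, _ = np.linalg.qr(rng.standard_normal((feat_dim, num_classes)))
    k = num_classes
    # Classical simplex ETF: M = sqrt(K/(K-1)) * U @ (I - 11^T / K).
    return np.sqrt(k / (k - 1)) * u @ (np.eye(k) - np.ones((k, k)) / k)

etf = simplex_etf(num_classes=10, feat_dim=64)
gram = etf.T @ etf                       # 10 x 10 Gram matrix of the anchors
print(np.allclose(np.diag(gram), 1.0))   # unit norm -> True
print(np.allclose(gram[0, 1], -1 / 9))   # equiangular at -1/(K-1) -> True
```

Because all K anchors, including those reserved for unseen classes, exist from the start, later tasks can claim their anchors without displacing old-class geometry, which is how the abstract motivates the plasticity gain.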
Related papers
- 3DResT: A Strong Baseline for Semi-Supervised 3D Referring Expression Segmentation [73.877177695218]
3D Referring Expression (3D-RES) typically requires extensive instance-level annotations, which are time-consuming and costly. Semi-supervised learning (SSL) mitigates this by using limited labeled data alongside abundant unlabeled data, improving performance while reducing annotation costs. In this paper, we introduce the first semi-supervised learning framework for 3D-RES, presenting a robust baseline method named 3DResT.
arXiv Detail & Related papers (2025-04-17T02:50:52Z)
- CRMSP: A Semi-supervised Approach for Key Information Extraction with Class-Rebalancing and Merged Semantic Pseudo-Labeling [10.886757419138343]
We propose a novel semi-supervised approach for key information extraction (KIE) with Class-Rebalancing and Merged Semantic Pseudo-Labeling (CRMSP).
The CRP module introduces a reweighting factor to rebalance pseudo-labels, increasing attention to tail classes (a generic sketch of this kind of reweighting appears after this list).
The MSP module clusters tail features of unlabeled data by assigning samples to Merged Prototypes (MP).
arXiv Detail & Related papers (2024-07-19T07:41:26Z)
- Continual Learning on a Diet: Learning from Sparsely Labeled Streams Under Constrained Computation [123.4883806344334]
We study a realistic continual learning setting where learning algorithms are granted a restricted computational budget per time step while training.
We apply this setting to large-scale semi-supervised continual learning scenarios with sparse label rates.
Our extensive analysis and ablations demonstrate that DietCL remains stable across the full spectrum of label sparsity, computational budgets, and other ablation settings.
arXiv Detail & Related papers (2024-04-19T10:10:39Z)
- Dynamic Sub-graph Distillation for Robust Semi-supervised Continual Learning [47.64252639582435]
We focus on semi-supervised continual learning (SSCL), where the model progressively learns from partially labeled data with unknown categories. We propose a novel approach called Dynamic Sub-Graph Distillation (DSGD) for semi-supervised continual learning.
arXiv Detail & Related papers (2023-12-27T04:40:12Z)
- Roll With the Punches: Expansion and Shrinkage of Soft Label Selection for Semi-supervised Fine-Grained Learning [42.71454054383897]
We propose Soft Label Selection with Confidence-Aware Clustering based on Class Transition Tracking (SoC).
Our approach demonstrates superior performance in semi-supervised fine-grained visual classification (SS-FGVC).
arXiv Detail & Related papers (2023-12-19T15:22:37Z)
- ProtoCon: Pseudo-label Refinement via Online Clustering and Prototypical Consistency for Efficient Semi-supervised Learning [60.57998388590556]
ProtoCon is a novel method for confidence-based pseudo-labeling.
The online nature of ProtoCon allows it to utilise the label history of the entire dataset in one training cycle.
It delivers significant gains and faster convergence over the state of the art on standard benchmarks.
arXiv Detail & Related papers (2023-03-22T23:51:54Z)
- Uncertainty-Aware Distillation for Semi-Supervised Few-Shot Class-Incremental Learning [16.90277839119862]
We present a framework named Uncertainty-aware Distillation with Class-Equilibrium (UaD-CE).
We introduce the CE module, which employs class-balanced self-training to avoid the gradual dominance of easily classified classes in pseudo-label generation.
Comprehensive experiments on three benchmark datasets demonstrate that our method can boost the adaptability of unlabeled data.
arXiv Detail & Related papers (2023-01-24T12:53:06Z)
- Contrastive Credibility Propagation for Reliable Semi-Supervised Learning [6.014538614447467]
We propose Contrastive Credibility Propagation (CCP) for deep SSL via iterative transductive pseudo-label refinement.
CCP unifies semi-supervised learning and noisy-label learning with the goal of reliably outperforming a supervised baseline in any data scenario.
arXiv Detail & Related papers (2022-11-17T23:01:47Z)
- Open-Set Semi-Supervised Learning for 3D Point Cloud Understanding [62.17020485045456]
It is commonly assumed in semi-supervised learning (SSL) that the unlabeled data are drawn from the same distribution as the labeled data.
We propose to selectively utilize unlabeled data through sample weighting, so that only conducive unlabeled data are prioritized.
arXiv Detail & Related papers (2022-05-02T16:09:17Z)
- In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning [53.1047775185362]
Pseudo-labeling (PL) is a general SSL approach that does not have this constraint but performs relatively poorly in its original formulation.
We argue that PL underperforms due to erroneous high-confidence predictions from poorly calibrated models.
We propose an uncertainty-aware pseudo-label selection (UPS) framework that improves pseudo-labeling accuracy by drastically reducing the amount of noise encountered in the training process (a generic sketch of this style of selection appears after this list).
arXiv Detail & Related papers (2021-01-15T23:29:57Z)
- ORDisCo: Effective and Efficient Usage of Incremental Unlabeled Data for Semi-supervised Continual Learning [52.831894583501395]
Continual learning typically assumes the incoming data are fully labeled, which may not hold in real applications.
We propose deep Online Replay with Discriminator Consistency (ORDisCo) to interdependently learn a classifier with a conditional generative adversarial network (GAN).
We show that ORDisCo achieves significant performance improvements on various semi-supervised learning benchmark datasets for SSCL.
arXiv Detail & Related papers (2021-01-02T09:04:14Z)
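The CRMSP entry above mentions a reweighting factor that shifts attention toward tail classes but gives no formula. The sketch below is a generic inverse-frequency reweighting, one common way to realize such a factor; the function name, the `alpha` exponent, and the +1 smoothing are assumptions, not the paper's definition.

```python
import numpy as np

def class_rebalance_weights(pseudo_labels: np.ndarray,
                            num_classes: int,
                            alpha: float = 1.0) -> np.ndarray:
    """Per-class loss weights that grow as a class becomes rarer among the
    current pseudo-labels; alpha controls how aggressive the rebalancing is."""
    counts = np.bincount(pseudo_labels, minlength=num_classes) + 1  # +1 avoids /0
    weights = (counts.sum() / counts) ** alpha    # inverse-frequency factor
    return weights / weights.mean()               # normalize around 1.0

# Toy usage: a long-tailed set of pseudo-labels over 5 classes.
labels = np.array([0] * 80 + [1] * 12 + [2] * 5 + [3] * 2 + [4] * 1)
print(np.round(class_rebalance_weights(labels, num_classes=5), 2))
```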
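Several entries above (DCP in the main paper, ProtoCon, and UPS) hinge on deciding which unlabeled predictions to trust. None of the snippets spell out their selection rules, so the following is a hypothetical threshold-based selector with a simple stability check across stochastic passes, in the spirit of uncertainty-aware selection; the thresholds and the variance-across-passes proxy are assumptions, not any paper's exact criterion.

```python
import numpy as np

def select_pseudo_labels(probs: np.ndarray,
                         conf_thresh: float = 0.95,
                         var_thresh: float = 0.05):
    """probs: (T, N, C) softmax outputs for N unlabeled samples under T
    stochastic passes (e.g. augmentations or MC dropout). Returns the
    indices and labels of samples that are both confident and stable."""
    mean_p = probs.mean(axis=0)                 # (N, C) averaged prediction
    labels = mean_p.argmax(axis=1)              # candidate pseudo-labels
    conf = mean_p.max(axis=1)                   # confidence of the argmax
    # Variance of the winning class's probability across the T passes:
    var = probs[:, np.arange(probs.shape[1]), labels].var(axis=0)
    keep = (conf >= conf_thresh) & (var <= var_thresh)
    return np.flatnonzero(keep), labels[keep]

# Toy usage: 4 stochastic passes over 1000 samples and 10 classes.
rng = np.random.default_rng(0)
logits = 3.0 * rng.standard_normal((4, 1000, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
idx, y_hat = select_pseudo_labels(probs)
print(f"kept {idx.size} / 1000 unlabeled samples")
```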