Memory-Efficient Semi-Supervised Continual Learning: The World is its
Own Replay Buffer
- URL: http://arxiv.org/abs/2101.09536v1
- Date: Sat, 23 Jan 2021 17:23:08 GMT
- Title: Memory-Efficient Semi-Supervised Continual Learning: The World is its
Own Replay Buffer
- Authors: James Smith, Jonathan Balloch, Yen-Chang Hsu, Zsolt Kira
- Abstract summary: Rehearsal is a critical component for class-incremental continual learning, yet it requires a substantial memory budget.
Our work investigates whether we can significantly reduce this memory budget by leveraging unlabeled data from an agent's environment.
We show that a strategy built on pseudo-labeling, consistency regularization, Out-of-Distribution (OoD) detection, and knowledge distillation reduces forgetting in this setting.
- Score: 26.85498630152788
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rehearsal is a critical component for class-incremental continual learning,
yet it requires a substantial memory budget. Our work investigates whether we
can significantly reduce this memory budget by leveraging unlabeled data from
an agent's environment in a realistic and challenging continual learning
paradigm. Specifically, we explore and formalize a novel semi-supervised
continual learning (SSCL) setting, where labeled data is scarce yet non-i.i.d.
unlabeled data from the agent's environment is plentiful. Importantly, data
distributions in the SSCL setting are realistic and therefore reflect object
class correlations between, and among, the labeled and unlabeled data
distributions. We show that a strategy built on pseudo-labeling, consistency
regularization, Out-of-Distribution (OoD) detection, and knowledge distillation
reduces forgetting in this setting. Our approach, DistillMatch, increases
performance over the state-of-the-art by no less than 8.7% average task
accuracy and up to a 54.5% increase in average task accuracy in SSCL CIFAR-100
experiments. Moreover, we demonstrate that DistillMatch can save up to 0.23
stored images per processed unlabeled image compared to the next best method,
which only saves 0.08. Our results suggest that focusing on realistic
correlated distributions is a significantly new perspective, which accentuates
the importance of leveraging the world's structure as a continual learning
strategy.
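The abstract names four ingredients of DistillMatch: pseudo-labeling, consistency regularization, OoD detection, and knowledge distillation. The paper's actual method is not reproduced here, but a minimal FixMatch-style NumPy sketch illustrates how confidence-thresholded pseudo-labeling, a consistency loss between weak and strong augmentations, and a distillation term can be combined. The threshold `tau` and temperature `T` are illustrative hyperparameters, not values from the paper.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def pseudo_label_loss(weak_logits, strong_logits, tau=0.9):
    """Pseudo-labeling with consistency regularization (FixMatch-style).

    Predictions on weakly augmented views give hard pseudo-labels; the
    loss pushes strongly augmented views toward the same class, but only
    for samples where the weak-view confidence exceeds tau.
    """
    probs = softmax(weak_logits)
    conf = probs.max(axis=1)          # per-sample confidence
    labels = probs.argmax(axis=1)     # hard pseudo-labels
    mask = conf >= tau                # keep only confident samples
    if not mask.any():
        return 0.0
    # Cross-entropy of strong-view predictions against pseudo-labels.
    log_p = np.log(softmax(strong_logits)[mask, labels[mask]] + 1e-12)
    return float(-log_p.mean())

def distill_loss(old_logits, new_logits, T=2.0):
    """Knowledge distillation: mean KL(old || new) at temperature T."""
    p = softmax(old_logits / T)
    q = softmax(new_logits / T)
    return float((p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=1).mean())
```

In a continual-learning loop, the distillation term would be computed against a frozen copy of the model from the previous task, so unlabeled data regularizes the new model toward old-task behavior without storing replay images.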
Related papers
- Semi-Supervised Regression with Heteroscedastic Pseudo-Labels [50.54050677867914]
We propose an uncertainty-aware pseudo-labeling framework that dynamically adjusts pseudo-label influence from a bi-level optimization perspective.
We provide theoretical insights and extensive experiments to validate our approach across various benchmark SSR datasets.
arXiv Detail & Related papers (2025-10-17T03:06:23Z)
- A Contrastive Learning-Guided Confident Meta-learning for Zero Shot Anomaly Detection [17.73056562717683]
CoZAD is a novel zero-shot anomaly detection framework.
It integrates soft confident learning with meta-learning and contrastive feature representation.
We show it outperforms existing methods on 6 out of 7 industrial benchmarks.
arXiv Detail & Related papers (2025-08-25T09:27:31Z)
- Building Blocks for Robust and Effective Semi-Supervised Real-World Object Detection [1.188383832081829]
Semi-supervised object detection (SSOD) based on pseudo-labeling significantly reduces dependence on large labeled datasets.
However, real-world applications of SSOD often face critical challenges, including class imbalance, label noise, and labeling errors.
We present an in-depth analysis of SSOD under real-world conditions, uncovering causes of suboptimal pseudo-labeling and key trade-offs between label quality and quantity.
arXiv Detail & Related papers (2025-03-24T17:15:24Z)
- Enhancing Image Classification in Small and Unbalanced Datasets through Synthetic Data Augmentation [0.0]
This paper introduces a novel synthetic augmentation strategy using class-specific Variational Autoencoders (VAEs) and latent space to improve discrimination capabilities.
By generating realistic, varied synthetic data that fills feature space gaps, we address issues of data scarcity and class imbalance.
The proposed strategy was tested in a small dataset of 321 images created to train and validate an automatic method for assessing the quality of cleanliness of esophagogastroduodenoscopy images.
arXiv Detail & Related papers (2024-09-16T13:47:52Z)
- Dynamic Sub-graph Distillation for Robust Semi-supervised Continual Learning [52.046037471678005]
We focus on semi-supervised continual learning (SSCL), where the model progressively learns from partially labeled data with unknown categories.
We propose a novel approach called Dynamic Sub-Graph Distillation (DSGD) for semi-supervised continual learning.
arXiv Detail & Related papers (2023-12-27T04:40:12Z)
- BAL: Balancing Diversity and Novelty for Active Learning [53.289700543331925]
We introduce a novel framework, Balancing Active Learning (BAL), which constructs adaptive sub-pools to balance diverse and uncertain data.
Our approach outperforms all established active learning methods on widely recognized benchmarks by 1.20%.
arXiv Detail & Related papers (2023-12-26T08:14:46Z)
- Exploring the Boundaries of Semi-Supervised Facial Expression Recognition using In-Distribution, Out-of-Distribution, and Unconstrained Data [23.4909421082857]
We present a study on 11 of the most recent semi-supervised methods, in the context of facial expression recognition (FER).
Our investigation covers semi-supervised learning from in-distribution, out-of-distribution, unconstrained, and very small unlabelled data.
With an equal number of labelled samples, semi-supervised learning delivers a considerable improvement over supervised learning.
arXiv Detail & Related papers (2023-06-02T01:40:08Z)
- Class-Aware Contrastive Semi-Supervised Learning [51.205844705156046]
We propose a general method named Class-aware Contrastive Semi-Supervised Learning (CCSSL) to improve pseudo-label quality and enhance the model's robustness in the real-world setting.
Our proposed CCSSL has significant performance improvements over the state-of-the-art SSL methods on the standard datasets CIFAR100 and STL10.
arXiv Detail & Related papers (2022-03-04T12:18:23Z)
- To be Critical: Self-Calibrated Weakly Supervised Learning for Salient Object Detection [95.21700830273221]
Weakly-supervised salient object detection (WSOD) aims to develop saliency models using image-level annotations.
We propose a self-calibrated training strategy by explicitly establishing a mutual calibration loop between pseudo labels and network predictions.
We prove that even a much smaller dataset with well-matched annotations can facilitate models to achieve better performance as well as generalizability.
arXiv Detail & Related papers (2021-09-04T02:45:22Z)
- SCARF: Self-Supervised Contrastive Learning using Random Feature Corruption [72.35532598131176]
We propose SCARF, a technique for contrastive learning, where views are formed by corrupting a random subset of features.
We show that SCARF complements existing strategies and outperforms alternatives like autoencoders.
arXiv Detail & Related papers (2021-06-29T08:08:33Z)
- Towards Reducing Labeling Cost in Deep Object Detection [61.010693873330446]
We propose a unified framework for active learning, that considers both the uncertainty and the robustness of the detector.
Our method is able to pseudo-label the very confident predictions, suppressing a potential distribution drift.
arXiv Detail & Related papers (2021-06-22T16:53:09Z)
- ORDisCo: Effective and Efficient Usage of Incremental Unlabeled Data for Semi-supervised Continual Learning [52.831894583501395]
Continual learning assumes the incoming data are fully labeled, which might not be applicable in real applications.
We propose deep Online Replay with Discriminator Consistency (ORDisCo) to interdependently learn a classifier with a conditional generative adversarial network (GAN).
We show ORDisCo achieves significant performance improvement on various semi-supervised learning benchmark datasets for SSCL.
arXiv Detail & Related papers (2021-01-02T09:04:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.