Queried Unlabeled Data Improves and Robustifies Class-Incremental
Learning
- URL: http://arxiv.org/abs/2206.07842v2
- Date: Fri, 17 Jun 2022 15:57:09 GMT
- Title: Queried Unlabeled Data Improves and Robustifies Class-Incremental
Learning
- Authors: Tianlong Chen, Sijia Liu, Shiyu Chang, Lisa Amini, Zhangyang Wang
- Abstract summary: Class-incremental learning (CIL) suffers from the notorious dilemma between learning newly added classes and preserving previously learned class knowledge.
We propose to leverage "free" external unlabeled data querying in continual learning.
We show that queried unlabeled data can continue to help, and we seamlessly extend CIL-QUD into its robustified versions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Class-incremental learning (CIL) suffers from the notorious dilemma between
learning newly added classes and preserving previously learned class knowledge.
That catastrophic forgetting issue can be mitigated by storing historical
data for replay, yet doing so incurs memory overheads as well as imbalanced
prediction updates. To address this dilemma, we propose to leverage "free"
external unlabeled data querying in continual learning. We first present a CIL
with Queried Unlabeled Data (CIL-QUD) scheme, where we only store a handful of
past training samples as anchors and use them to query relevant unlabeled
examples each time. Along with new and past stored data, the queried unlabeled
data are effectively utilized through learning-without-forgetting (LwF)
regularizers and class-balanced training. Besides preserving model
generalization over past and current tasks, we next study the problem of
adversarial robustness for CIL-QUD. Inspired by the recent success of learning
robust models with unlabeled data, we explore a new robustness-aware CIL
setting, where the learned adversarial robustness has to resist forgetting and
be transferred as new tasks come in continually. While existing options easily
fail, we show that queried unlabeled data can continue to help, and we seamlessly
extend CIL-QUD into its robustified versions, RCIL-QUD. Extensive experiments
demonstrate that CIL-QUD achieves substantial accuracy gains on CIFAR-10 and
CIFAR-100, compared to previous state-of-the-art CIL approaches. Moreover,
RCIL-QUD establishes the first strong milestone for robustness-aware CIL. Code
is available at https://github.com/VITA-Group/CIL-QUD.
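The abstract describes two concrete mechanisms: anchor-based querying of relevant unlabeled examples, and LwF-style regularization over the queried data. The sketch below illustrates both under stated assumptions: the feature-space cosine retrieval, neighbor count k, and distillation temperature T are illustrative choices, and the function names are hypothetical rather than taken from the released code.

```python
import torch
import torch.nn.functional as F

def query_unlabeled(anchor_feats, unlabeled_feats, k=32):
    """For each stored anchor, retrieve the k most similar unlabeled
    examples by cosine similarity in feature space. A hypothetical
    stand-in for the paper's querying step."""
    a = F.normalize(anchor_feats, dim=1)      # (num_anchors, d)
    u = F.normalize(unlabeled_feats, dim=1)   # (num_unlabeled, d)
    sims = a @ u.t()                          # pairwise cosine similarities
    return sims.topk(k, dim=1).indices        # indices into the unlabeled pool

def lwf_loss(new_logits, old_logits, T=2.0):
    """Learning-without-forgetting regularizer: temperature-scaled
    distillation that keeps the current model's outputs on old classes
    close to those of the frozen previous-phase model."""
    p_old = F.softmax(old_logits / T, dim=1)
    log_p_new = F.log_softmax(new_logits / T, dim=1)
    return F.kl_div(log_p_new, p_old, reduction="batchmean") * (T * T)
```

In a training step, the queried examples would be fed through both the frozen previous-phase model (yielding old_logits) and the current model, with lwf_loss added to a class-balanced cross-entropy on the new data and stored anchors.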
Related papers
- Happy: A Debiased Learning Framework for Continual Generalized Category Discovery [54.54153155039062]
This paper explores the underexplored task of Continual Generalized Category Discovery (C-GCD).
C-GCD aims to incrementally discover new classes from unlabeled data while maintaining the ability to recognize previously learned classes.
We introduce a debiased learning framework, namely Happy, characterized by Hardness-aware prototype sampling and soft entropy regularization.
arXiv Detail & Related papers (2024-10-09T04:18:51Z)
- Beyond Prompt Learning: Continual Adapter for Efficient Rehearsal-Free Continual Learning [22.13331870720021]
We propose an approach beyond prompt learning for the rehearsal-free continual learning (RFCL) task, called Continual Adapter (C-ADA).
C-ADA flexibly extends specific weights in its continual adapter layer (CAL) to learn new knowledge for each task and freezes old weights to preserve prior knowledge.
Our approach achieves significantly improved performance and training speed, outperforming the current state-of-the-art (SOTA) method.
arXiv Detail & Related papers (2024-07-14T17:40:40Z)
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, one line of methods replays the data of previously experienced tasks when learning new ones.
However, storing raw data is often impractical in view of memory constraints or data privacy issues.
As a replacement, data-free replay methods synthesize samples by inverting them from the classification model (see the sketch after this list).
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
- COOLer: Class-Incremental Learning for Appearance-Based Multiple Object Tracking [32.47215340215641]
This paper extends the scope of continual learning research to class-incremental learning for multiple object tracking (MOT).
Previous solutions for continual learning of object detectors do not address the data association stage of appearance-based trackers.
We introduce COOLer, a COntrastive- and cOntinual-Learning-based tracker, which incrementally learns to track new categories while preserving past knowledge.
arXiv Detail & Related papers (2023-10-04T17:49:48Z)
- Few-Shot Class-Incremental Learning via Entropy-Regularized Data-Free Replay [52.251188477192336]
Few-shot class-incremental learning (FSCIL) aims to enable a deep learning system to incrementally learn new classes from limited data.
We show through empirical results that adopting data replay is surprisingly favorable.
We propose data-free replay, which synthesizes data with a generator without accessing real data.
arXiv Detail & Related papers (2022-07-22T17:30:51Z)
- ClaRe: Practical Class Incremental Learning By Remembering Previous Class Representations [9.530976792843495]
Class Incremental Learning (CIL) aims to learn new concepts well, but not at the expense of performance and accuracy on old data.
ClaRe is an efficient solution for CIL that remembers the representations of learned classes in each increment.
ClaRe generalizes better than prior methods thanks to producing diverse instances from the distributions of previously learned classes.
arXiv Detail & Related papers (2021-03-29T10:39:42Z)
- ORDisCo: Effective and Efficient Usage of Incremental Unlabeled Data for Semi-supervised Continual Learning [52.831894583501395]
Continual learning typically assumes the incoming data are fully labeled, which may not hold in real applications.
We propose deep Online Replay with Discriminator Consistency (ORDisCo) to interdependently learn a classifier with a conditional generative adversarial network (GAN).
We show ORDisCo achieves significant performance improvements on various semi-supervised learning benchmark datasets for semi-supervised continual learning (SSCL).
arXiv Detail & Related papers (2021-01-02T09:04:14Z)
- Learning Adaptive Embedding Considering Incremental Class [55.21855842960139]
Class-Incremental Learning (CIL) aims to train a reliable model on streaming data, in which unknown classes emerge sequentially.
Different from traditional closed-set learning, CIL has two main challenges: 1) novel class detection, and 2) model update:
after the novel classes are detected, the model needs to be updated without re-training on the entire previous data.
arXiv Detail & Related papers (2020-08-31T04:11:24Z)
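Two entries above, the data replay approach for incremental learning and the entropy-regularized data-free replay paper, rest on data-free replay: synthesizing stand-ins for old-task data from the trained model instead of storing real samples. Below is a minimal sketch of the basic model-inversion objective such methods build on; it is a generic illustration, not either paper's actual method, and the model argument, optimizer settings, and CIFAR-like input shape are assumptions.

```python
import torch
import torch.nn.functional as F

def invert_samples(model, target_class, n=16, steps=200, lr=0.1,
                   shape=(3, 32, 32)):
    """Basic model inversion: optimize random inputs so that a frozen
    classifier assigns them to target_class. The papers above add
    further regularizers (e.g. feature statistics, entropy terms)
    on top of this bare objective."""
    model.eval()
    x = torch.randn(n, *shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    y = torch.full((n,), target_class, dtype=torch.long)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y)   # push x toward target_class
        loss.backward()
        opt.step()
    return x.detach()
```

The synthesized batches can then stand in for stored exemplars during replay, which sidesteps the memory and privacy issues noted above.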