Dealing with Cross-Task Class Discrimination in Online Continual Learning
- URL: http://arxiv.org/abs/2305.14657v1
- Date: Wed, 24 May 2023 02:52:30 GMT
- Title: Dealing with Cross-Task Class Discrimination in Online Continual Learning
- Authors: Yiduo Guo, Bing Liu, Dongyan Zhao
- Abstract summary: This paper argues for another challenge in class-incremental learning (CIL): cross-task class discrimination (CTCD), i.e., how to establish decision boundaries between the classes of the new task and those of old tasks with no (or limited) access to the old task data.
A replay method saves a small amount of data (replay data) from previous tasks. When a batch of current-task data arrives, the system jointly trains on the new data and some sampled replay data.
This paper argues that the replay approach also has a dynamic training bias issue which reduces the effectiveness of the replay data in solving the CTCD problem.
- Score: 54.31411109376545
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing continual learning (CL) research regards catastrophic forgetting
(CF) as almost the only challenge. This paper argues for another challenge in
class-incremental learning (CIL), which we call cross-task class discrimination
(CTCD), i.e., how to establish decision boundaries between the classes of the
new task and old tasks with no (or limited) access to the old task data. CTCD
is implicitly and partially dealt with by replay-based methods. A replay method
saves a small amount of data (replay data) from previous tasks. When a batch of
current task data arrives, the system jointly trains the new data and some
sampled replay data. The replay data enables the system to learn the decision
boundaries between the new classes and the old classes, but only partially,
because the amount of saved data is small. However, this paper argues that the replay approach
also has a dynamic training bias issue which reduces the effectiveness of the
replay data in solving the CTCD problem. A novel optimization objective with a
gradient-based adaptive method is proposed to dynamically deal with the problem
in the online CL process. Experimental results show that the new method
achieves much better results in online CL.
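For intuition, here is a minimal PyTorch sketch of the generic replay step the abstract describes: keep a small buffer of past-task examples and jointly train on each incoming batch plus a sample from the buffer. The buffer size, reservoir sampling, and model interface are illustrative assumptions, and this is the plain replay baseline, not the paper's proposed adaptive objective.

```python
import random
import torch
import torch.nn.functional as F

class ReplayBuffer:
    """Reservoir-style buffer holding a small sample of past-task data."""
    def __init__(self, capacity=500):
        self.capacity = capacity
        self.data = []   # list of (x, y) pairs
        self.seen = 0    # number of examples observed so far

    def add(self, x, y):
        # Reservoir sampling keeps a uniform sample of the whole stream.
        for xi, yi in zip(x, y):
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append((xi, yi))
            else:
                j = random.randrange(self.seen)
                if j < self.capacity:
                    self.data[j] = (xi, yi)

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def replay_step(model, optimizer, buffer, x_new, y_new, replay_bs=32):
    """One online CL step: joint loss on new data plus sampled replay data."""
    loss = F.cross_entropy(model(x_new), y_new)
    if buffer.data:
        x_old, y_old = buffer.sample(replay_bs)
        # Old classes appear only through the buffer, so this term is what
        # (partially) shapes the cross-task decision boundaries.
        loss = loss + F.cross_entropy(model(x_old), y_old)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    buffer.add(x_new, y_new)
    return loss.item()
```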
Related papers
- Reducing Catastrophic Forgetting in Online Class Incremental Learning Using Self-Distillation [3.8506666685467343]
In continual learning, previous knowledge is forgotten when a model learns new tasks.
In this paper, we tackle this problem by acquiring transferable knowledge through self-distillation.
Our proposed method outperformed conventional methods in experiments on the CIFAR10, CIFAR100, and MiniImageNet datasets.
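For intuition, a minimal PyTorch sketch of one common self-distillation recipe: distill the current model toward a frozen snapshot of itself taken before the new task. The temperature, loss weight, and snapshot timing are illustrative assumptions, not necessarily this paper's exact formulation.

```python
import copy
import torch
import torch.nn.functional as F

def make_teacher(model):
    """Freeze a snapshot of the model to act as its own teacher."""
    teacher = copy.deepcopy(model).eval()
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

def self_distillation_loss(model, teacher, x, y, T=2.0, alpha=0.5):
    """Label cross-entropy plus KL toward the frozen snapshot's softened
    logits -- a common self-distillation form (hyperparameters here are
    illustrative, not the cited paper's exact recipe)."""
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x) / T, dim=1)
    kd = F.kl_div(F.log_softmax(logits / T, dim=1), soft_targets,
                  reduction="batchmean") * (T * T)
    return (1 - alpha) * ce + alpha * kd
```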
arXiv Detail & Related papers (2024-09-17T16:26:33Z)
- Replay Consolidation with Label Propagation for Continual Object Detection [7.454468349023651]
Continual Learning for Object Detection (CLOD) poses additional difficulties compared to CL for classification.
In CLOD, images from previous tasks may contain unknown classes that could reappear labeled in future tasks.
We propose a novel technique to solve CLOD called Replay Consolidation with Label Propagation for Object Detection.
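A rough sketch of one plausible reading of label propagation in this setting: let the previous task's detector pseudo-label old classes in current images and merge those boxes with the new ground truth. The torchvision-style detector output and the score threshold are assumptions; the paper's actual procedure may differ.

```python
import torch

@torch.no_grad()
def propagate_old_labels(prev_detector, image, new_boxes, new_labels,
                         score_thresh=0.7):
    """Merge current-task ground truth with pseudo-labels for old classes
    produced by the previous model (one plausible reading of 'label
    propagation'; assumes a torchvision-style detector returning a dict
    with 'boxes', 'labels', and 'scores')."""
    preds = prev_detector([image])[0]
    keep = preds["scores"] > score_thresh
    boxes = torch.cat([new_boxes, preds["boxes"][keep]])
    labels = torch.cat([new_labels, preds["labels"][keep]])
    return boxes, labels
```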
arXiv Detail & Related papers (2024-09-09T14:16:27Z)
- Adaptive Retention & Correction for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC).
ARC achieves average performance increases of 2.7% and 2.6% on the CIFAR-100 and ImageNet-R datasets, respectively.
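The summary does not spell out ARC's mechanics, so as a generic illustration of correcting the classifier's recency bias, here is a weight-rescaling sketch in the spirit of weight-aligning methods; it is not the ARC procedure itself.

```python
import torch

@torch.no_grad()
def correct_recency_bias(classifier, old_class_idx, new_class_idx):
    """Rescale new-class weight rows so their average norm matches the old
    classes'. A generic fix for the output layer's bias toward the most
    recent task (illustrative; not ARC's actual procedure)."""
    W = classifier.weight  # shape: [num_classes, feat_dim]
    old_norm = W[old_class_idx].norm(dim=1).mean()
    new_norm = W[new_class_idx].norm(dim=1).mean()
    W[new_class_idx] *= old_norm / new_norm
```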
arXiv Detail & Related papers (2024-05-23T08:43:09Z)
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods proposes to replay the data of experienced tasks when learning new tasks.
However, storing such data is often infeasible in practice due to memory constraints or data privacy issues.
As a replacement, data-free data replay methods are proposed by inverting samples from the classification model.
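For intuition, a minimal sketch of the basic model-inversion idea behind data-free replay: optimize random noise so the frozen classifier assigns it to an old class. Real methods add image priors and regularizers omitted here; shapes and hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def invert_samples(model, target_class, shape=(8, 3, 32, 32),
                   steps=200, lr=0.1):
    """Synthesize inputs the frozen classifier assigns to `target_class`
    by gradient descent on random noise -- the core model-inversion idea
    behind data-free replay (priors/regularizers omitted for brevity)."""
    model.eval()
    x = torch.randn(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    y = torch.full((shape[0],), target_class, dtype=torch.long)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        opt.step()
    return x.detach()
```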
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
- DST-Det: Simple Dynamic Self-Training for Open-Vocabulary Object Detection [72.25697820290502]
This work introduces a straightforward and efficient strategy to identify potential novel classes through zero-shot classification.
We refer to this approach as the self-training strategy, which enhances recall and accuracy for novel classes without requiring extra annotations, datasets, or re-training.
Empirical evaluations on three datasets, including LVIS, V3Det, and COCO, demonstrate significant improvements over the baseline performance.
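A hedged sketch of the generic zero-shot classification step such a pipeline relies on: score region embeddings against text embeddings of class names and keep high-confidence matches to non-annotated classes as pseudo-labels. The precomputed CLIP-style embeddings and the threshold are assumptions, not DST-Det's exact procedure.

```python
import torch
import torch.nn.functional as F

def zero_shot_labels(region_feats, text_feats, known_mask, thresh=0.3):
    """Match region embeddings [R, D] to class-name text embeddings [C, D]
    by cosine similarity and flag confident matches to *novel* classes as
    pseudo-labels. Generic sketch; assumes CLIP-style embeddings are
    precomputed elsewhere."""
    region = F.normalize(region_feats, dim=1)
    text = F.normalize(text_feats, dim=1)
    sims = region @ text.t()           # cosine similarities, shape [R, C]
    sims[:, known_mask] = -1.0         # ignore already-annotated classes
    scores, labels = sims.max(dim=1)
    keep = scores > thresh
    return labels[keep], scores[keep], keep
```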
arXiv Detail & Related papers (2023-10-02T17:52:24Z)
- OER: Offline Experience Replay for Continual Offline Reinforcement Learning [25.985985377992034]
It is desirable for an agent to continually learn new skills from a sequence of pre-collected offline datasets.
In this paper, we formulate a new setting, continual offline reinforcement learning (CORL), where an agent learns a sequence of offline reinforcement learning tasks.
We propose a new model-based experience selection scheme to build the replay buffer, where a transition model is learned to approximate the state distribution.
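One plausible reading of model-based experience selection, sketched below: score each transition by the learned transition model's prediction error and keep the best-covered ones for the replay buffer. The model interface and the top-k criterion are assumptions; the paper's actual selection rule may differ.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def select_experiences(transition_model, states, actions, next_states, k=256):
    """Rank transitions by how well a learned transition model predicts
    them and keep the top-k for the buffer (one plausible reading of
    'model-based experience selection'; assumes the model maps
    (state, action) -> predicted next state)."""
    pred_next = transition_model(states, actions)
    errors = F.mse_loss(pred_next, next_states, reduction="none").mean(dim=1)
    idx = torch.argsort(errors)[:k]    # lowest error = best-covered states
    return states[idx], actions[idx], next_states[idx]
```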
arXiv Detail & Related papers (2023-05-23T08:16:44Z)
- vCLIMB: A Novel Video Class Incremental Learning Benchmark [53.90485760679411]
We introduce vCLIMB, a novel video continual learning benchmark.
vCLIMB is a standardized test-bed to analyze catastrophic forgetting of deep models in video continual learning.
We propose a temporal consistency regularization that can be applied on top of memory-based continual learning methods.
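An illustrative form such a regularizer could take: penalize prediction drift between temporally adjacent frames of the same clip. The frame-level classifier interface is an assumption, and this is not necessarily vCLIMB's exact term.

```python
import torch
import torch.nn.functional as F

def temporal_consistency_loss(model, frames):
    """Penalize prediction drift between adjacent frames of one clip.
    `frames` has shape [T, C, H, W]; a frame-level classifier is assumed.
    Illustrative form of temporal consistency regularization."""
    probs = F.softmax(model(frames), dim=1)   # [T, num_classes]
    return F.mse_loss(probs[1:], probs[:-1])
```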
arXiv Detail & Related papers (2022-01-23T22:14:17Z)
- Online Continual Learning Via Candidates Voting [7.704949298975352]
We introduce an effective and memory-efficient method for online continual learning under class-incremental setting.
Our proposed method achieves the best results on several benchmark datasets for online continual learning, including CIFAR-10, CIFAR-100, and CORe50.
arXiv Detail & Related papers (2021-10-17T15:45:32Z)
- An Investigation of Replay-based Approaches for Continual Learning [79.0660895390689]
Continual learning (CL) is a major challenge of machine learning (ML) and describes the ability to learn several tasks sequentially without catastrophic forgetting (CF).
Several solution classes have been proposed, of which so-called replay-based approaches seem very promising due to their simplicity and robustness.
We empirically investigate replay-based approaches of continual learning and assess their potential for applications.
arXiv Detail & Related papers (2021-08-15T15:05:02Z)