Online Continual Learning Via Candidates Voting
- URL: http://arxiv.org/abs/2110.08855v1
- Date: Sun, 17 Oct 2021 15:45:32 GMT
- Title: Online Continual Learning Via Candidates Voting
- Authors: Jiangpeng He and Fengqing Zhu
- Abstract summary: We introduce an effective and memory-efficient method for online continual learning under the class-incremental setting.
Our proposed method achieves the best results on different benchmark datasets for online continual learning, including CIFAR-10, CIFAR-100, and CORE-50.
- Score: 7.704949298975352
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continual learning in the online scenario aims to learn a sequence of new tasks from a data stream, using each sample only once for training, which is more realistic than the offline mode that assumes all data from a new task are available. However, this problem remains under-explored in the challenging class-incremental setting, in which the model must classify all classes seen so far during inference. In particular, performance degrades as the number of tasks grows or as additional classes are learned per task. In addition, most existing methods require storing original data as exemplars for knowledge replay, which may not be feasible for applications with a limited memory budget or privacy concerns. In this work, we introduce an effective and memory-efficient method for online continual learning under the class-incremental setting through candidate selection from each learned task together with prior incorporation, using stored feature embeddings instead of original data as exemplars. Implemented for image classification, our proposed method achieves the best results on different benchmark datasets for online continual learning, including CIFAR-10, CIFAR-100, and CORE-50, while requiring much less memory than existing works.
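A minimal sketch of the general idea described in the abstract, under the assumption that each learned task contributes stored class-mean feature embeddings (rather than raw images) and that the final prediction is a weighted vote over the candidates each task proposes; the class names, the top-k rule, and the inverse-distance weighting below are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

class CandidatesVotingClassifier:
    """Per-task candidate selection followed by a weighted vote (illustrative sketch)."""

    def __init__(self, top_k=2):
        self.top_k = top_k
        self.task_class_means = []  # one dict {class_id: mean_embedding} per learned task

    def add_task(self, features, labels):
        """Store class-mean embeddings for a finished task instead of raw exemplars."""
        means = {}
        for c in np.unique(labels):
            means[int(c)] = features[labels == c].mean(axis=0)
        self.task_class_means.append(means)

    def predict(self, feature):
        """Each task proposes its top-k closest classes; all candidates then vote."""
        votes = {}
        for means in self.task_class_means:
            classes = list(means.keys())
            dists = np.array([np.linalg.norm(feature - means[c]) for c in classes])
            for idx in np.argsort(dists)[: self.top_k]:
                # closer candidates receive a larger (inverse-distance) weight
                c = classes[idx]
                votes[c] = votes.get(c, 0.0) + 1.0 / (1.0 + dists[idx])
        return max(votes, key=votes.get)
```

Because only one mean embedding per class is kept in this sketch, memory scales with the number of classes rather than the number of stored images, which reflects the memory saving the abstract claims for embedding-based exemplars.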
Related papers
- Adaptive Retention & Correction for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC).
ARC achieves average performance increases of 2.7% and 2.6% on the CIFAR-100 and Imagenet-R datasets, respectively.
arXiv Detail & Related papers (2024-05-23T08:43:09Z)
- Complementary Learning Subnetworks for Parameter-Efficient Class-Incremental Learning [40.13416912075668]
We propose a rehearsal-free CIL approach that learns continually via the synergy between two Complementary Learning Subnetworks.
Our method achieves competitive results against state-of-the-art methods, especially in accuracy gain, memory cost, training efficiency, and task-order robustness.
arXiv Detail & Related papers (2023-06-21T01:43:25Z)
- Dealing with Cross-Task Class Discrimination in Online Continual Learning [54.31411109376545]
This paper argues for another challenge in class-incremental learning (CIL): how to establish decision boundaries between the classes of the new task and those of old tasks with no (or limited) access to the old task data.
A replay method saves a small amount of data (replay data) from previous tasks. When a batch of current-task data arrives, the system trains jointly on the new data and some sampled replay data (a minimal replay-buffer sketch follows this entry).
This paper argues that the replay approach also has a dynamic training bias issue, which reduces the effectiveness of the replay data in solving the cross-task class discrimination (CTCD) problem.
arXiv Detail & Related papers (2023-05-24T02:52:30Z)
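The replay mechanism described in the entry above can be sketched as a small buffer of past-task samples that gets mixed into every incoming batch; the reservoir-sampling policy, the `ReplayBuffer` class, and `model.update` below are illustrative assumptions rather than the paper's implementation.

```python
import random

class ReplayBuffer:
    """Small memory of past-task samples, filled by reservoir sampling (illustrative sketch)."""

    def __init__(self, capacity=200):
        self.capacity = capacity
        self.data = []  # list of (x, y) pairs from earlier tasks
        self.seen = 0   # total number of samples offered to the buffer

    def add(self, x, y):
        """Reservoir sampling keeps a roughly uniform sample of everything seen so far."""
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, n):
        return random.sample(self.data, min(n, len(self.data)))


def train_step(model, batch, buffer, replay_size=32):
    """Jointly train on the new batch plus a few replayed samples, then store the batch."""
    combined = list(batch) + buffer.sample(replay_size)
    model.update(combined)   # hypothetical single optimization step on the mixed batch
    for x, y in batch:       # current-task data become future replay candidates
        buffer.add(x, y)
```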
- Reinforced Meta Active Learning [11.913086438671357]
We present an online stream-based meta active learning method that learns an informativeness measure on the fly, directly from the data.
The method is based on reinforcement learning and combines episodic policy search and a contextual bandits approach.
We demonstrate on several real datasets that this method learns to select training samples more efficiently than existing state-of-the-art methods.
arXiv Detail & Related papers (2022-03-09T08:36:54Z)
- Exemplar-free Online Continual Learning [7.800379384628357]
Continual learning aims to learn new tasks from sequentially available data under the condition that each sample is observed only once by the learner.
Recent works have made remarkable achievements by storing part of the learned task data as exemplars for knowledge replay.
We propose a novel exemplar-free method that instead leverages the nearest-class-mean (NCM) classifier (a minimal NCM sketch follows this entry).
arXiv Detail & Related papers (2022-02-11T08:03:22Z)
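A minimal sketch of the nearest-class-mean rule mentioned in the entry above: keep a running mean feature vector per class and predict the class whose mean is closest. The incremental-update details of the cited method are not reproduced here, and the class and method names are assumptions.

```python
import numpy as np

class NCMClassifier:
    """Textbook nearest-class-mean rule with running per-class means (illustrative sketch)."""

    def __init__(self):
        self.sums = {}    # class_id -> running sum of feature vectors
        self.counts = {}  # class_id -> number of samples seen

    def update(self, feature, label):
        """Each sample is seen once: fold it into the running mean of its class."""
        if label not in self.sums:
            self.sums[label] = np.zeros_like(feature, dtype=float)
            self.counts[label] = 0
        self.sums[label] += feature
        self.counts[label] += 1

    def predict(self, feature):
        """Return the class whose mean embedding is closest to the query feature."""
        means = {c: self.sums[c] / self.counts[c] for c in self.sums}
        return min(means, key=lambda c: np.linalg.norm(feature - means[c]))
```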
- vCLIMB: A Novel Video Class Incremental Learning Benchmark [53.90485760679411]
We introduce vCLIMB, a novel video continual learning benchmark.
vCLIMB is a standardized test-bed to analyze catastrophic forgetting of deep models in video continual learning.
We propose a temporal consistency regularization that can be applied on top of memory-based continual learning methods.
arXiv Detail & Related papers (2022-01-23T22:14:17Z)
- Online Continual Learning with Natural Distribution Shifts: An Empirical Study with Visual Data [101.6195176510611]
"Online" continual learning enables evaluating both information retention and online learning efficacy.
In online continual learning, each incoming small batch of data is first used for testing and then added to the training set, making the problem truly online (a minimal test-then-train loop follows this entry).
We introduce a new benchmark for online continual visual learning that exhibits large scale and natural distribution shifts.
arXiv Detail & Related papers (2021-08-20T06:17:20Z)
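The test-then-train protocol described in the entry above can be sketched as a loop in which every incoming batch is evaluated before the model trains on it; `model.predict` and `model.train_on` are placeholder names for any online learner, not an API from the paper.

```python
def online_continual_eval(model, stream):
    """Test-then-train loop: evaluate each incoming batch before training on it once."""
    correct, total = 0, 0
    for batch in stream:        # stream yields small lists of (x, y) pairs
        for x, y in batch:      # 1) test on the incoming batch first
            correct += int(model.predict(x) == y)
            total += 1
        model.train_on(batch)   # 2) then the same batch joins the training data (used once)
    return correct / max(total, 1)
```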
- Knowledge Consolidation based Class Incremental Online Learning with Limited Data [41.87919913719975]
We propose a novel approach for class incremental online learning in a limited data setting.
We learn robust representations that are generalizable across tasks without suffering from the problems of catastrophic forgetting and overfitting.
arXiv Detail & Related papers (2021-06-12T15:18:29Z)
- Bilevel Continual Learning [76.50127663309604]
We present a novel continual learning framework named "Bilevel Continual Learning" (BCL).
Our experiments on continual learning benchmarks demonstrate the efficacy of the proposed BCL compared to many state-of-the-art methods.
arXiv Detail & Related papers (2020-07-30T16:00:23Z)
- Incremental Learning In Online Scenario [8.885829189810197]
Current state-of-the-art incremental learning methods require a long time to train the model whenever new classes are added.
We propose an incremental learning framework that can work in the challenging online learning scenario.
arXiv Detail & Related papers (2020-03-30T02:24:26Z)
- iTAML: An Incremental Task-Agnostic Meta-learning Approach [123.10294801296926]
Humans can continuously learn new knowledge as their experience grows.
Knowledge previously learned by deep neural networks can quickly fade when they are trained on a new task.
We introduce a novel meta-learning approach that seeks to maintain an equilibrium between all encountered tasks.
arXiv Detail & Related papers (2020-03-25T21:42:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.