Knowledge Consolidation based Class Incremental Online Learning with
Limited Data
- URL: http://arxiv.org/abs/2106.06795v1
- Date: Sat, 12 Jun 2021 15:18:29 GMT
- Title: Knowledge Consolidation based Class Incremental Online Learning with
Limited Data
- Authors: Mohammed Asad Karim, Vinay Kumar Verma, Pravendra Singh, Vinay
Namboodiri, Piyush Rai
- Abstract summary: We propose a novel approach for class incremental online learning in a limited data setting.
We learn robust representations that are generalizable across tasks without suffering from the problems of catastrophic forgetting and overfitting.
- Score: 41.87919913719975
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel approach for class incremental online learning in a
limited data setting. This problem setting is challenging because of the
following constraints: (1) Classes are given incrementally, which necessitates
a class incremental learning approach; (2) Data for each class is given in an
online fashion, i.e., each training example is seen only once during training;
(3) Each class has very few training examples; and (4) We do not use or assume
access to any replay/memory to store data from previous classes. Therefore, in
this setting, we have to handle the twofold problem of catastrophic forgetting
and overfitting. In our approach, we learn robust representations that
generalize across tasks and accommodate future classes with limited samples,
without suffering from catastrophic forgetting or overfitting.
Our proposed method leverages the meta-learning framework with knowledge
consolidation. The meta-learning framework enables rapid learning when samples
arrive in an online fashion, while knowledge consolidation encourages a
representation that is robust to forgetting under online updates, thereby
facilitating future learning. Our approach significantly
outperforms other methods on several benchmarks.
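The abstract does not spell out the update rule, so the following is only a minimal sketch of one plausible reading: a single model updated once per incoming example (online, no replay), with an L2 penalty toward a frozen pre-task snapshot standing in for "knowledge consolidation". All names and hyperparameters below are hypothetical and not taken from the paper; a meta-learned initialization (e.g., a MAML/Reptile-style start) would typically replace the plain SGD start, but that detail is not specified in the abstract.

```python
# Sketch only, NOT the authors' implementation.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


def online_task_update(model, stream, lr=1e-3, consolidation_weight=10.0):
    """Process one new class's examples exactly once (online, no replay)."""
    # Frozen snapshot of the representation learned so far; penalizing drift
    # away from it stands in for "knowledge consolidation" in this sketch.
    anchor = copy.deepcopy(model).eval()
    for p in anchor.parameters():
        p.requires_grad_(False)

    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for x, y in stream:                      # each (x, y) is seen only once
        logits = model(x)
        task_loss = F.cross_entropy(logits, y)
        drift = sum((p - q).pow(2).sum()
                    for p, q in zip(model.parameters(), anchor.parameters()))
        loss = task_loss + consolidation_weight * drift
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model


# Tiny usage example on random data (two examples of a "new" class, index 3).
if __name__ == "__main__":
    net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 5))
    fake_stream = [(torch.randn(1, 16), torch.tensor([3])) for _ in range(2)]
    online_task_update(net, fake_stream)
```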
Related papers
- RanDumb: A Simple Approach that Questions the Efficacy of Continual Representation Learning [68.42776779425978]
We show that existing online continually trained deep networks produce inferior representations compared to a simple pre-defined random transform.
We then train a simple linear classifier on top without storing any exemplars, processing one sample at a time in an online continual learning setting.
Our study reveals the significant limitations of representation learning, particularly in low-exemplar and online continual learning scenarios.
arXiv Detail & Related papers (2024-02-13T22:07:29Z)
- Learning from One Continuous Video Stream [70.30084026960819]
We introduce a framework for online learning from a single continuous video stream.
This poses great challenges given the high correlation between consecutive video frames.
We employ pixel-to-pixel modelling as a practical and flexible way to switch between pre-training and single-stream evaluation.
arXiv Detail & Related papers (2023-12-01T14:03:30Z) - Federated Unlearning via Active Forgetting [24.060724751342047]
We propose a novel federated unlearning framework based on incremental learning.
Our framework differs from existing federated unlearning methods that rely on approximate retraining or data influence estimation.
arXiv Detail & Related papers (2023-07-07T03:07:26Z) - Complementary Learning Subnetworks for Parameter-Efficient
Class-Incremental Learning [40.13416912075668]
We propose a rehearsal-free CIL approach that learns continually via the synergy between two Complementary Learning Subnetworks.
Our method achieves competitive results against state-of-the-art methods, especially in accuracy gain, memory cost, training efficiency, and task-order robustness.
arXiv Detail & Related papers (2023-06-21T01:43:25Z)
- vCLIMB: A Novel Video Class Incremental Learning Benchmark [53.90485760679411]
We introduce vCLIMB, a novel video continual learning benchmark.
vCLIMB is a standardized test-bed to analyze catastrophic forgetting of deep models in video continual learning.
We propose a temporal consistency regularization that can be applied on top of memory-based continual learning methods.
arXiv Detail & Related papers (2022-01-23T22:14:17Z)
- Online Continual Learning with Natural Distribution Shifts: An Empirical Study with Visual Data [101.6195176510611]
"Online" continual learning enables evaluating both information retention and online learning efficacy.
In online continual learning, each incoming small batch of data is first used for testing and then added to the training set, making the problem truly online; a minimal test-then-train loop is sketched after this list.
We introduce a new benchmark for online continual visual learning that exhibits large scale and natural distribution shifts.
arXiv Detail & Related papers (2021-08-20T06:17:20Z)
- Data-efficient Online Classification with Siamese Networks and Active Learning [11.501721946030779]
We investigate learning from limited labelled, nonstationary and imbalanced data in online classification.
We propose a learning method that synergistically combines siamese neural networks and active learning.
Our study shows that the proposed method is robust to data nonstationarity and imbalance, and significantly outperforms baselines and state-of-the-art algorithms in terms of both learning speed and performance.
arXiv Detail & Related papers (2020-10-04T19:07:19Z)
- Incremental Learning In Online Scenario [8.885829189810197]
Current state-of-the-art incremental learning methods require a long time to train the model whenever new classes are added.
We propose an incremental learning framework that can work in the challenging online learning scenario.
arXiv Detail & Related papers (2020-03-30T02:24:26Z)
- Provable Meta-Learning of Linear Representations [114.656572506859]
We provide fast, sample-efficient algorithms to address the dual challenges of learning a common set of features from multiple, related tasks, and transferring this knowledge to new, unseen tasks.
We also provide information-theoretic lower bounds on the sample complexity of learning these linear features.
arXiv Detail & Related papers (2020-02-26T18:21:34Z)
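The test-then-train protocol mentioned in the Online Continual Learning with Natural Distribution Shifts entry above can be summarized in a few lines. This is a generic sketch of that evaluation loop; the `predict` and `update` methods are hypothetical placeholders, not an API from any of the listed papers.

```python
def test_then_train(model, batch_stream):
    """Evaluate each incoming batch before training on it, as in the protocol above."""
    correct, seen = 0, 0
    for inputs, labels in batch_stream:
        preds = model.predict(inputs)        # 1) test on the new batch first
        correct += sum(int(p == y) for p, y in zip(preds, labels))
        seen += len(labels)
        model.update(inputs, labels)         # 2) then train on that same batch once
    return correct / max(seen, 1)            # online accuracy over the stream
```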