Learning Fast, Learning Slow: A General Continual Learning Method based on Complementary Learning System
- URL: http://arxiv.org/abs/2201.12604v1
- Date: Sat, 29 Jan 2022 15:15:23 GMT
- Title: Learning Fast, Learning Slow: A General Continual Learning Method based on Complementary Learning System
- Authors: Elahe Arani, Fahad Sarfraz, Bahram Zonooz
- Abstract summary: We propose CLS-ER, a novel dual memory experience replay (ER) method.
New knowledge is acquired while aligning the decision boundaries with the semantic memories.
Our approach achieves state-of-the-art performance on standard benchmarks.
- Score: 13.041607703862724
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Humans excel at continually learning from an ever-changing environment,
whereas it remains a challenge for deep neural networks, which exhibit
catastrophic forgetting. The complementary learning system (CLS) theory
suggests that the interplay between rapid instance-based learning and slow
structured learning in the brain is crucial for accumulating and retaining
knowledge. Here, we propose CLS-ER, a novel dual memory experience replay (ER)
method which maintains short-term and long-term semantic memories that interact
with the episodic memory. Our method employs an effective replay mechanism
whereby new knowledge is acquired while aligning the decision boundaries with
the semantic memories. CLS-ER neither relies on task boundaries nor makes any
assumption about the data distribution, which makes it versatile and well
suited for "general continual learning". Our approach achieves state-of-the-art
performance on standard benchmarks as well as more realistic general continual
learning settings.
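To make the replay mechanism described above concrete, the following is a minimal PyTorch-style sketch of a dual-memory experience replay step. Everything in it (the reservoir buffer, the EMA decay rates, the MSE consistency term, and the choice of the stable memory as the alignment target) is an illustrative assumption rather than the paper's exact formulation; CLS-ER's actual update and memory-selection rules differ in detail.

```python
# Sketch of a CLS-ER-style dual-memory experience replay update (PyTorch).
# The buffer, loss weights, EMA rates, and alignment target are assumptions.
import random
import torch
import torch.nn.functional as F


class ReservoirBuffer:
    """Episodic memory filled by reservoir sampling, so no task boundaries are needed."""

    def __init__(self, capacity):
        self.capacity, self.seen, self.data = capacity, 0, []

    def add(self, xs, ys):
        for x, y in zip(xs, ys):
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append((x, y))
            else:
                j = random.randrange(self.seen)
                if j < self.capacity:
                    self.data[j] = (x, y)

    def sample(self, n):
        batch = random.sample(self.data, min(n, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)


@torch.no_grad()
def ema_update(memory, working, decay):
    """Move a semantic memory toward the working model (exponential moving average)."""
    for m, w in zip(memory.parameters(), working.parameters()):
        m.mul_(decay).add_(w, alpha=1.0 - decay)


def cls_er_step(working, plastic, stable, buffer, x, y, opt, lam=0.1):
    """One update: cross-entropy on the incoming batch plus replay, with the
    working model's logits on replayed samples pulled toward a semantic memory."""
    loss = F.cross_entropy(working(x), y)
    if buffer.data:
        xb, yb = buffer.sample(x.size(0))
        with torch.no_grad():
            target = stable(xb)  # alignment target (assumed here: the stable memory)
        logits = working(xb)
        loss = loss + F.cross_entropy(logits, yb) + lam * F.mse_loss(logits, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    # short-term (fast-changing) and long-term (slow-changing) semantic memories
    ema_update(plastic, working, decay=0.95)
    ema_update(stable, working, decay=0.999)
    buffer.add(x.detach(), y)
    return loss.item()
```

In use, `plastic` and `stable` would start as deep copies of `working` with gradients disabled, and the buffer would be shared across the entire data stream; since nothing in the loop depends on task identity, the same step applies in the general continual learning setting.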
Related papers
- Exploiting the Semantic Knowledge of Pre-trained Text-Encoders for Continual Learning [70.64617500380287]
Continual learning allows models to learn from new data while retaining previously learned knowledge.
The label information of the images carries semantic knowledge that can be related to previously acquired knowledge of semantic classes.
We propose integrating semantic guidance within and across tasks by capturing semantic similarity using text embeddings.
arXiv Detail & Related papers (2024-08-02T07:51:44Z)
- A Unified and General Framework for Continual Learning [58.72671755989431]
Continual Learning (CL) focuses on learning from dynamic and changing data distributions while retaining previously acquired knowledge.
Various methods have been developed to address the challenge of catastrophic forgetting, including regularization-based, Bayesian-based, and memory-replay-based techniques.
This research bridges the gap between these methodologies by introducing a comprehensive and overarching framework that encompasses and reconciles them.
arXiv Detail & Related papers (2024-03-20T02:21:44Z)
- Infinite dSprites for Disentangled Continual Learning: Separating Memory Edits from Generalization [36.23065731463065]
We introduce Infinite dSprites, a parsimonious tool for creating continual classification benchmarks of arbitrary length.
We show that over a sufficiently long time horizon, the performance of all major types of continual learning methods deteriorates on this simple benchmark.
In a simple setting with direct supervision on the generative factors, we show how learning class-agnostic transformations offers a way to circumvent catastrophic forgetting.
arXiv Detail & Related papers (2023-12-27T22:05:42Z)
- Saliency-Guided Hidden Associative Replay for Continual Learning [13.551181595881326]
Continual Learning is a burgeoning domain in next-generation AI, focusing on training neural networks over a sequence of tasks akin to human learning.
This paper presents Saliency-Guided Hidden Associative Replay for Continual Learning (SHARC), a novel framework that combines associative memory with replay-based strategies.
SHARC primarily archives salient data segments via sparse memory encoding.
arXiv Detail & Related papers (2023-10-06T15:54:12Z)
- Learning Bayesian Sparse Networks with Full Experience Replay for Continual Learning [54.7584721943286]
Continual Learning (CL) methods aim to enable machine learning models to learn new tasks without catastrophic forgetting of those that have been previously mastered.
Existing CL approaches often keep a buffer of previously-seen samples, perform knowledge distillation, or use regularization techniques towards this goal.
We propose to only activate and select sparse neurons for learning current and past tasks at any stage.
arXiv Detail & Related papers (2022-02-21T13:25:03Z)
- DualNet: Continual Learning, Fast and Slow [14.902239050081032]
We propose a novel continual learning framework named "DualNet".
It comprises a fast learning system for supervised learning of pattern-separated representation from specific tasks and a slow learning system for unsupervised representation learning of task-agnostic general representation via a Self-Supervised Learning (SSL) technique.
Our experiments show that DualNet outperforms state-of-the-art continual learning methods by a large margin (a minimal fast/slow training sketch is given after this list).
arXiv Detail & Related papers (2021-10-01T02:31:59Z)
- Incremental Embedding Learning via Zero-Shot Translation [65.94349068508863]
Current state-of-the-art incremental learning methods tackle the catastrophic forgetting problem in traditional classification networks.
We propose a novel class-incremental method for embedding networks, named zero-shot translation class-incremental method (ZSTCI).
In addition, ZSTCI can easily be combined with existing regularization-based incremental learning methods to further improve performance of embedding networks.
arXiv Detail & Related papers (2020-12-31T08:21:37Z)
- Meta-learning the Learning Trends Shared Across Tasks [123.10294801296926]
Gradient-based meta-learning algorithms excel at quick adaptation to new tasks with limited data.
Existing meta-learning approaches depend only on the current task's information during adaptation.
We propose a 'Path-aware' model-agnostic meta-learning approach.
arXiv Detail & Related papers (2020-10-19T08:06:47Z)
- Bilevel Continual Learning [76.50127663309604]
We present a novel continual learning framework named "Bilevel Continual Learning" (BCL).
Our experiments on continual learning benchmarks demonstrate the efficacy of the proposed BCL compared to many state-of-the-art methods.
arXiv Detail & Related papers (2020-07-30T16:00:23Z)
- Neuromodulated Neural Architectures with Local Error Signals for Memory-Constrained Online Continual Learning [4.2903672492917755]
We develop a biologically inspired, lightweight neural network architecture that incorporates local learning and neuromodulation.
We demonstrate the efficacy of our approach in both single-task and continual learning settings.
arXiv Detail & Related papers (2020-07-16T07:41:23Z)
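Relating to the DualNet entry above: the sketch below is a rough illustration of the fast/slow division of labor that entry describes, pairing a slow learner trained with a toy self-supervised invariance loss and a fast supervised head on top of its features. The architecture, the SSL objective, and the training details are placeholder assumptions, not DualNet's actual design.

```python
# Illustrative fast/slow dual-learner (assumed architecture, not DualNet's own).
import torch
import torch.nn.functional as F
from torch import nn


class SlowLearner(nn.Module):
    """Slow system: task-agnostic representation trained with a self-supervised loss."""

    def __init__(self, in_dim=784, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, feat_dim))

    def forward(self, x):
        return self.encoder(x)

    def ssl_loss(self, x, noise=0.1):
        # toy invariance objective: two noisy views should yield similar features
        z1 = F.normalize(self(x + noise * torch.randn_like(x)), dim=1)
        z2 = F.normalize(self(x + noise * torch.randn_like(x)), dim=1)
        return (1.0 - (z1 * z2).sum(dim=1)).mean()


class FastLearner(nn.Module):
    """Fast system: lightweight supervised classifier on top of the slow features."""

    def __init__(self, feat_dim=128, n_classes=10):
        super().__init__()
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, feats):
        return self.head(feats)


def dual_step(slow, fast, opt_slow, opt_fast, x, y):
    # slow update: self-supervised, uses no labels
    opt_slow.zero_grad()
    slow.ssl_loss(x).backward()
    opt_slow.step()

    # fast update: supervised, on detached slow features
    opt_fast.zero_grad()
    loss = F.cross_entropy(fast(slow(x).detach()), y)
    loss.backward()
    opt_fast.step()
    return loss.item()
```

The sketch only captures the split between an unlabeled slow pathway and a labeled fast pathway; DualNet itself couples the two systems more tightly, as summarized in its entry above.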
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.