AdaER: An Adaptive Experience Replay Approach for Continual Lifelong
Learning
- URL: http://arxiv.org/abs/2308.03810v2
- Date: Sat, 19 Aug 2023 16:23:09 GMT
- Title: AdaER: An Adaptive Experience Replay Approach for Continual Lifelong
Learning
- Authors: Xingyu Li, Bo Tang, Haifeng Li
- Abstract summary: We present adaptive-experience replay (AdaER) to address the challenge of continual lifelong learning.
AdaER consists of two stages: memory replay and memory update.
Results: AdaER outperforms existing continual lifelong learning baselines.
- Score: 16.457330925212606
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continual lifelong learning is a machine learning framework inspired by
human learning, where learners are trained to continuously acquire new
knowledge in a sequential manner. However, the non-stationary nature of
streaming training data poses a significant challenge known as catastrophic
forgetting, which refers to the rapid forgetting of previously learned
knowledge when new tasks are introduced. While some approaches, such as
experience replay (ER), have been proposed to mitigate this issue, their
performance remains limited, particularly in the class-incremental scenario
which is considered natural and highly challenging. In this paper, we present a
novel algorithm, called adaptive-experience replay (AdaER), to address the
challenge of continual lifelong learning. AdaER consists of two stages: memory
replay and memory update. In the memory replay stage, AdaER introduces a
contextually-cued memory recall (C-CMR) strategy, which selectively replays
memories that are most conflicting with the current input data in terms of both
data and task. Additionally, AdaER incorporates an entropy-balanced reservoir
sampling (E-BRS) strategy to enhance the performance of the memory buffer by
maximizing information entropy. To evaluate the effectiveness of AdaER, we
conduct experiments on established supervised continual lifelong learning
benchmarks, specifically focusing on class-incremental learning scenarios. The
results demonstrate that AdaER outperforms existing continual lifelong learning
baselines, highlighting its efficacy in mitigating catastrophic forgetting and
improving learning performance.
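The abstract describes E-BRS only at a high level, so the exact procedure is not specified here. As an illustrative sketch, one plausible reading is a reservoir sampler whose eviction step targets the currently over-represented class, pushing the buffer's label distribution toward uniform (maximum entropy). The class name, the eviction rule, and all details below are assumptions for illustration, not the paper's actual implementation:

```python
import random
from collections import Counter

class EntropyBalancedReservoir:
    """Hypothetical sketch of entropy-balanced reservoir sampling.

    Classic reservoir sampling keeps a uniform sample of the stream; this
    variant biases evictions toward the majority class so that the label
    distribution in the buffer stays near-uniform (high label entropy).
    """

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []          # list of (x, y) pairs
        self.seen = 0             # number of stream items observed so far
        self.rng = random.Random(seed)

    def add(self, x, y):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append((x, y))
            return
        # Classic reservoir acceptance test: admit with prob capacity/seen.
        if self.rng.random() < self.capacity / self.seen:
            # Evict a random sample of the majority class, which nudges
            # the buffer's label entropy upward.
            counts = Counter(label for _, label in self.buffer)
            majority = max(counts, key=counts.get)
            slots = [i for i, (_, label) in enumerate(self.buffer)
                     if label == majority]
            self.buffer[self.rng.choice(slots)] = (x, y)
```

Fed a balanced two-class stream, this sampler keeps the per-class counts in the buffer within a small constant of each other while still admitting items with the standard reservoir probability.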
Related papers
- Adaptive Retention & Correction for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC)
ARC achieves an average performance increase of 2.7% and 2.6% on the CIFAR-100 and Imagenet-R datasets.
arXiv Detail & Related papers (2024-05-23T08:43:09Z)
- Replay-enhanced Continual Reinforcement Learning [37.34722105058351]
We introduce RECALL, a replay-enhanced method that greatly improves the plasticity of existing replay-based methods on new tasks.
Experiments on the Continual World benchmark show that RECALL performs significantly better than purely perfect memory replay.
arXiv Detail & Related papers (2023-11-20T06:21:52Z)
- Class-Incremental Learning Using Generative Experience Replay Based on Time-aware Regularization [24.143811670210546]
Generative experience replay addresses the challenge of learning new tasks accumulatively without forgetting.
We introduce a time-aware regularization method to fine-tune the three training objective terms used for generative replay.
Experimental results indicate that our method pushes the limit of brain-inspired continual learners under strict settings.
arXiv Detail & Related papers (2023-10-05T21:07:45Z)
- Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce BAdam, a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
arXiv Detail & Related papers (2023-09-15T17:10:51Z)
- VERSE: Virtual-Gradient Aware Streaming Lifelong Learning with Anytime Inference [36.61783715563126]
Streaming lifelong learning is a challenging setting of lifelong learning with the goal of continuous learning without forgetting.
We introduce a novel streaming approach to lifelong learning, in which each training example is observed only once.
We propose a novel virtual-gradient-based approach for continual representation learning, which adapts to each new example while also generalizing well on past data to prevent catastrophic forgetting.
arXiv Detail & Related papers (2023-09-15T07:54:49Z)
- OER: Offline Experience Replay for Continual Offline Reinforcement Learning [25.985985377992034]
Continuously learning new skills via a sequence of pre-collected offline datasets is desired for an agent.
In this paper, we formulate a new setting, continual offline reinforcement learning (CORL), where an agent learns a sequence of offline reinforcement learning tasks.
We propose a new model-based experience selection scheme to build the replay buffer, where a transition model is learned to approximate the state distribution.
arXiv Detail & Related papers (2023-05-23T08:16:44Z)
- Continual Learning with Strong Experience Replay [32.154995019080594]
We propose a CL method with Strong Experience Replay (SER)
Besides distilling past experience from the memory buffer, SER also utilizes future experiences mimicked on the current training data.
Experimental results on multiple image classification datasets show that our SER method surpasses the state-of-the-art methods by a noticeable margin.
arXiv Detail & Related papers (2023-05-23T02:42:54Z)
- Learning with Recoverable Forgetting [77.56338597012927]
Learning wIth Recoverable Forgetting explicitly handles the task- or sample-specific knowledge removal and recovery.
Specifically, LIRF brings in two innovative schemes, namely knowledge deposit and withdrawal.
We conduct experiments on several datasets, and demonstrate that the proposed LIRF strategy yields encouraging results with gratifying generalization capability.
arXiv Detail & Related papers (2022-07-17T16:42:31Z)
- Learning Bayesian Sparse Networks with Full Experience Replay for Continual Learning [54.7584721943286]
Continual Learning (CL) methods aim to enable machine learning models to learn new tasks without catastrophic forgetting of those that have been previously mastered.
Existing CL approaches often keep a buffer of previously-seen samples, perform knowledge distillation, or use regularization techniques towards this goal.
We propose to only activate and select sparse neurons for learning current and past tasks at any stage.
arXiv Detail & Related papers (2022-02-21T13:25:03Z)
- Relational Experience Replay: Continual Learning by Adaptively Tuning Task-wise Relationship [54.73817402934303]
We propose Experience Continual Replay (ERR), a bi-level learning framework that adaptively tunes task-wise relationships to achieve a better stability-plasticity tradeoff.
ERR can consistently improve the performance of all baselines and surpass current state-of-the-art methods.
arXiv Detail & Related papers (2021-12-31T12:05:22Z) - Bilevel Continual Learning [76.50127663309604]
We present a novel framework of continual learning named "Bilevel Continual Learning" (BCL)
Our experiments on continual learning benchmarks demonstrate the efficacy of the proposed BCL compared to many state-of-the-art methods.
arXiv Detail & Related papers (2020-07-30T16:00:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.