Practical Recommendations for Replay-based Continual Learning Methods
- URL: http://arxiv.org/abs/2203.10317v1
- Date: Sat, 19 Mar 2022 12:44:44 GMT
- Title: Practical Recommendations for Replay-based Continual Learning Methods
- Authors: Gabriele Merlin and Vincenzo Lomonaco and Andrea Cossu and Antonio
Carta and Davide Bacciu
- Abstract summary: Continual Learning requires the model to learn from a stream of dynamic, non-stationary data without forgetting previous knowledge.
Replay approaches have empirically proved to be the most effective ones.
- Score: 18.559132470835937
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continual Learning requires the model to learn from a stream of dynamic,
non-stationary data without forgetting previous knowledge. Several approaches
have been developed in the literature to tackle the Continual Learning
challenge. Among them, Replay approaches have empirically proved to be the most
effective ones. Replay operates by saving some samples in memory which are then
used to rehearse knowledge during training in subsequent tasks. However, an
extensive comparison and deeper understanding of different replay
implementation subtleties is still missing in the literature. The aim of this
work is to compare and analyze existing replay-based strategies and provide
practical recommendations on developing efficient, effective and generally
applicable replay-based strategies. In particular, we investigate the role of
memory size, compare different weighting policies, and discuss the impact of
data augmentation, which allows reaching better performance with smaller
memory sizes.
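The mechanics described in the abstract are easy to picture in code. Below is a minimal sketch of rehearsal with a fixed-size buffer, a replay-loss weighting policy, and augmentation of replayed samples; the reservoir-sampling buffer, the `replay_weight` knob, and the flip augmentation are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal rehearsal sketch (illustrative, not the paper's code).
import random
import torch
import torch.nn.functional as F

class ReservoirBuffer:
    """Fixed-size memory: reservoir sampling keeps an approximately
    uniform sample of the stream, so capacity (the memory size) is
    the main hyperparameter."""
    def __init__(self, capacity):
        self.capacity, self.seen, self.data = capacity, 0, []

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

def train_step(model, opt, batch_x, batch_y, buffer, replay_weight=1.0):
    # Weighting policy: total loss = current-task loss + w * replay loss.
    loss = F.cross_entropy(model(batch_x), batch_y)
    mem = buffer.sample(len(batch_x))
    if mem:
        mx = torch.stack([x for x, _ in mem])
        my = torch.tensor([y for _, y in mem])
        # Augmenting replayed samples (here: a random horizontal flip)
        # stretches a small memory over more of the input distribution.
        if random.random() < 0.5:
            mx = torch.flip(mx, dims=[-1])
        loss = loss + replay_weight * F.cross_entropy(model(mx), my)
    opt.zero_grad()
    loss.backward()
    opt.step()
    for x, y in zip(batch_x, batch_y):
        buffer.add(x.detach(), int(y))
    return loss.item()
```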
Related papers
- AdaER: An Adaptive Experience Replay Approach for Continual Lifelong
Learning [16.457330925212606]
We present adaptive-experience replay (AdaER) to address the challenge of continual lifelong learning.
AdaER consists of two stages: memory replay and memory update.
Results: AdaER outperforms existing continual lifelong learning baselines.
arXiv Detail & Related papers (2023-08-07T01:25:45Z)
- Integrating Curricula with Replays: Its Effects on Continual Learning [3.2489082010225494]
Humans engage in learning and reviewing processes with curricula when acquiring new skills or knowledge.
The goal is to emulate the human learning process, thereby improving knowledge retention and facilitating learning transfer.
Existing replay methods in continual learning agents involve the random selection and ordering of data from previous tasks.
arXiv Detail & Related papers (2023-07-08T14:14:55Z) - Continual Learning with Strong Experience Replay [32.154995019080594]
We propose a CL method with Strong Experience Replay (SER).
Besides distilling past experience from the memory buffer, SER also exploits future experiences mimicked on the current training data.
Experimental results on multiple image classification datasets show that our SER method surpasses the state-of-the-art methods by a noticeable margin.
arXiv Detail & Related papers (2023-05-23T02:42:54Z) - A baseline on continual learning methods for video action recognition [15.157938674002793]
Continual learning aims to solve long-standing limitations of classic supervised models.
We present a benchmark of state-of-the-art continual learning methods on video action recognition.
arXiv Detail & Related papers (2023-04-20T14:20:43Z) - Relational Experience Replay: Continual Learning by Adaptively Tuning
Task-wise Relationship [54.73817402934303]
We propose Relational Experience Replay (RER), a bi-level learning framework that adaptively tunes task-wise relationships to achieve a better stability-plasticity trade-off.
RER can consistently improve the performance of all baselines and surpass current state-of-the-art methods.
arXiv Detail & Related papers (2021-12-31T12:05:22Z)
- Replay For Safety [51.11953997546418]
In experience replay, past transitions are stored in a memory buffer and re-used during learning.
We show that using an appropriate biased sampling scheme allows us to achieve a safe policy.
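As a rough illustration of the biased-sampling idea, one could weight transitions by an associated cost signal so that risky experience is replayed more often; the `cost` field and the exponent `beta` below are hypothetical knobs, not the paper's scheme.

```python
import random

def biased_sample(buffer, k, beta=2.0):
    """buffer: list of dicts with keys 'transition' and 'cost' (>= 0).
    Higher-cost (riskier) transitions are replayed more often."""
    weights = [(item["cost"] + 1e-3) ** beta for item in buffer]
    return random.choices(buffer, weights=weights, k=k)
```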
arXiv Detail & Related papers (2021-12-08T11:10:57Z)
- An Investigation of Replay-based Approaches for Continual Learning [79.0660895390689]
Continual learning (CL) is a major challenge of machine learning (ML) and describes the ability to learn several tasks sequentially without catastrophic forgetting (CF).
Several solution classes have been proposed, of which so-called replay-based approaches seem very promising due to their simplicity and robustness.
We empirically investigate replay-based approaches of continual learning and assess their potential for applications.
arXiv Detail & Related papers (2021-08-15T15:05:02Z)
- Reducing Representation Drift in Online Continual Learning [87.71558506591937]
We study the online continual learning paradigm, where agents must learn from a changing distribution with constrained memory and compute.
In this work we instead focus on the change in representations of previously observed data due to the introduction of previously unobserved class samples in the incoming data stream.
arXiv Detail & Related papers (2021-04-11T15:19:30Z)
- Revisiting Fundamentals of Experience Replay [91.24213515992595]
We present a systematic and extensive analysis of experience replay in Q-learning methods.
We focus on two fundamental properties: the replay capacity and the ratio of learning updates to experience collected.
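In code, those two properties are just two knobs of an off-policy training loop; the toy loop below (a plain deque as the buffer, made-up values) only shows where each one enters.

```python
from collections import deque
import random

replay_capacity = 10_000     # property 1: how many transitions the buffer holds
updates_per_transition = 4   # property 2: learning updates per experience collected

buffer = deque(maxlen=replay_capacity)  # oldest data is evicted automatically
for step in range(100):
    buffer.append((step, "obs", "act", 0.0))  # stand-in for one env transition
    for _ in range(updates_per_transition):
        batch = random.sample(buffer, min(8, len(buffer)))  # one gradient update per batch
```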
arXiv Detail & Related papers (2020-07-13T21:22:17Z)
- Experience Replay with Likelihood-free Importance Weights [123.52005591531194]
We propose to reweight experiences based on their likelihood under the stationary distribution of the current policy.
We apply the proposed approach empirically to two competitive methods, Soft Actor-Critic (SAC) and Twin Delayed Deep Deterministic policy gradient (TD3).
arXiv Detail & Related papers (2020-06-23T17:17:44Z)
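One standard likelihood-free way to obtain such weights (not necessarily the estimator used in the paper) is to train a classifier to distinguish fresh on-policy samples from buffer samples and use its odds ratio as the weight; the linear discriminator below is a toy stand-in.

```python
import torch
import torch.nn as nn

def importance_weights(discriminator, buffer_states):
    """discriminator outputs logits for P(sample is on-policy);
    the density-ratio weight is the odds d / (1 - d)."""
    with torch.no_grad():
        d = torch.sigmoid(discriminator(buffer_states)).clamp(1e-4, 1 - 1e-4)
    return d / (1 - d)

# Toy usage with a linear discriminator over 8-dimensional states.
disc = nn.Linear(8, 1)
states = torch.randn(32, 8)
w = importance_weights(disc, states).squeeze(-1)  # shape: (32,)
```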
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information and is not responsible for any consequences of its use.