A Benchmark and Empirical Analysis for Replay Strategies in Continual Learning
- URL: http://arxiv.org/abs/2208.02660v1
- Date: Thu, 4 Aug 2022 13:48:11 GMT
- Title: A Benchmark and Empirical Analysis for Replay Strategies in Continual Learning
- Authors: Qihan Yang, Fan Feng, Rosa Chan
- Abstract summary: Computational systems are not, in general, capable of learning tasks sequentially.
This paper provides an in-depth evaluation of memory replay methods.
All experiments are conducted on multiple datasets spanning various domains.
- Score: 2.922007656878633
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the capacity of continual learning, humans can continuously acquire
knowledge throughout their lifespan. However, computational systems are not, in
general, capable of learning tasks sequentially. This long-standing challenge
for deep neural networks (DNNs) is called catastrophic forgetting. Multiple
solutions have been proposed to overcome this limitation. This paper provides an
in-depth evaluation of memory replay methods, exploring the efficiency,
performance, and scalability of various sampling strategies for selecting
replay data. All experiments are conducted on multiple datasets spanning various
domains. Finally, a practical guide to selecting replay methods for various
data distributions is provided.
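To make the abstract's comparison of sampling strategies more concrete, here is a minimal sketch of a fixed-size replay memory with two interchangeable sampling strategies (uniform random and a simple class-balanced scheme). The buffer layout, capacity, and strategy names are illustrative assumptions, not the benchmark's actual implementation.

```python
import random
from collections import defaultdict

class ReplayMemory:
    """Fixed-size episodic memory with pluggable sampling strategies.

    Illustrative sketch only; the benchmark's buffer management and
    strategy names may differ.
    """

    def __init__(self, capacity=500):
        self.capacity = capacity
        self.items = []   # list of (x, y) pairs
        self.seen = 0     # number of examples observed so far

    def add(self, x, y):
        """Reservoir sampling keeps an unbiased subset of the stream."""
        if len(self.items) < self.capacity:
            self.items.append((x, y))
        else:
            j = random.randint(0, self.seen)
            if j < self.capacity:
                self.items[j] = (x, y)
        self.seen += 1

    def sample_uniform(self, k):
        """Uniformly random replay minibatch."""
        return random.sample(self.items, min(k, len(self.items)))

    def sample_class_balanced(self, k):
        """Draw roughly the same number of stored examples per class."""
        by_class = defaultdict(list)
        for x, y in self.items:
            by_class[y].append((x, y))
        classes = list(by_class)
        out = []
        while len(out) < min(k, len(self.items)):
            c = classes[len(out) % len(classes)]
            out.append(random.choice(by_class[c]))
        return out


if __name__ == "__main__":
    mem = ReplayMemory(capacity=100)
    for i in range(1000):                      # toy stream with 5 classes
        mem.add(x=[float(i)], y=i % 5)
    print(len(mem.sample_uniform(32)), len(mem.sample_class_balanced(32)))
```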
Related papers
- Watch Your Step: Optimal Retrieval for Continual Learning at Scale [1.7265013728931]
In continual learning, a model learns incrementally over time while minimizing interference between old and new tasks.
One of the most widely used approaches in continual learning is referred to as replay.
We propose a framework for evaluating selective retrieval strategies, categorized by simple, independent class- and sample-selective primitives.
We propose a set of strategies to prevent duplicate replays and explore whether new samples with low loss values can be learned without replay (a rough sketch of such a retrieval step is given below).
arXiv Detail & Related papers (2024-04-16T17:35:35Z)
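The following is a loose sketch of the retrieval step described in the entry above: a sample-selective primitive that ranks buffered examples by current loss, avoids replaying recent duplicates, and optionally skips replay when the incoming batch is already low-loss. The function names, duplicate window, and loss threshold are assumptions for illustration, not the paper's actual procedure.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def select_replay(model, buf_x, buf_y, k, recently_used,
                  loss_skip_threshold=None, new_x=None, new_y=None):
    """Pick k buffered examples by highest current loss, skipping recent picks.

    Hypothetical sketch: `recently_used` is a set of buffer indices replayed
    in the last few steps (duplicate prevention); `loss_skip_threshold` skips
    replay entirely when the incoming batch already has low loss.
    """
    if loss_skip_threshold is not None and new_x is not None:
        new_loss = F.cross_entropy(model(new_x), new_y)
        if new_loss.item() < loss_skip_threshold:
            return torch.empty(0, dtype=torch.long)       # no replay needed

    per_sample_loss = F.cross_entropy(model(buf_x), buf_y, reduction="none")
    order = torch.argsort(per_sample_loss, descending=True)
    picked = [i.item() for i in order if i.item() not in recently_used][:k]
    recently_used.update(picked)                           # caller clears periodically
    return torch.tensor(picked, dtype=torch.long)


if __name__ == "__main__":
    model = torch.nn.Linear(8, 3)                          # toy classifier
    buf_x, buf_y = torch.randn(64, 8), torch.randint(0, 3, (64,))
    idx = select_replay(model, buf_x, buf_y, k=16, recently_used=set())
    print(idx.shape)
```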
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods propose to replay the data of experienced tasks when learning new tasks.
However, storing raw data from previous tasks is often impractical due to memory constraints or data privacy concerns.
As an alternative, data-free replay methods synthesize replay samples by inverting the classification model (a generic inversion sketch is given below).
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
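As a rough illustration of the model-inversion idea mentioned in the entry above, the sketch below optimizes random noise so that a frozen classifier assigns it to a target old class; such synthesized inputs can then stand in for stored replay data. This is a generic inversion loop under assumed hyperparameters, not the paper's specific method.

```python
import torch
import torch.nn.functional as F

def invert_class_samples(frozen_model, target_class, n=16, shape=(1, 28, 28),
                         steps=200, lr=0.1):
    """Synthesize pseudo replay inputs for one old class by model inversion.

    Illustrative sketch: optimize noise images so the frozen classifier
    predicts `target_class`, with a small L2 prior to keep inputs bounded.
    """
    frozen_model.eval()
    x = torch.randn(n, *shape, requires_grad=True)
    labels = torch.full((n,), target_class, dtype=torch.long)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = frozen_model(x)
        loss = F.cross_entropy(logits, labels) + 1e-3 * x.pow(2).mean()
        loss.backward()
        opt.step()
    return x.detach(), labels


if __name__ == "__main__":
    # Toy frozen classifier over flattened 28x28 "images" with 10 classes.
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
    pseudo_x, pseudo_y = invert_class_samples(model, target_class=3, steps=20)
    print(pseudo_x.shape, pseudo_y[:4])
```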
- Dealing with Cross-Task Class Discrimination in Online Continual Learning [54.31411109376545]
This paper identifies another challenge in class-incremental learning (CIL): how to establish decision boundaries between the classes of the new task and those of old tasks with no (or limited) access to the old task data, termed cross-task class discrimination (CTCD).
A replay method saves a small amount of data (replay data) from previous tasks. When a batch of current-task data arrives, the system jointly trains on the new data and some sampled replay data (a minimal sketch of this joint update is given below).
This paper argues that the replay approach also suffers from a dynamic training bias, which reduces the effectiveness of the replay data in solving the CTCD problem.
arXiv Detail & Related papers (2023-05-24T02:52:30Z)
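A minimal sketch of the joint training step summarized in the entry above: one optimizer update computed on the concatenation of the incoming batch and a batch sampled from the replay buffer. Buffer size, sampling scheme, and loss weighting are illustrative assumptions.

```python
import random
import torch
import torch.nn.functional as F

def replay_train_step(model, optimizer, new_x, new_y, buffer, replay_batch=32):
    """One experience-replay update: new batch + sampled old examples.

    `buffer` is a list of (x, y) tensors from earlier tasks; the sketch
    mixes them with the current batch and takes a single gradient step.
    """
    xs, ys = [new_x], [new_y]
    if buffer:
        old = random.sample(buffer, min(replay_batch, len(buffer)))
        xs.append(torch.stack([x for x, _ in old]))
        ys.append(torch.tensor([y for _, y in old]))
    x = torch.cat(xs)
    y = torch.cat(ys)

    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = torch.nn.Linear(16, 4)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    buffer = [(torch.randn(16), random.randrange(4)) for _ in range(100)]
    loss = replay_train_step(model, opt, torch.randn(8, 16),
                             torch.randint(0, 4, (8,)), buffer)
    print(f"joint loss: {loss:.3f}")
```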
- Accelerating exploration and representation learning with offline pre-training [52.6912479800592]
We show that exploration and representation learning can be improved by separately learning two different models from a single offline dataset.
We show that learning a state representation using noise-contrastive estimation and a model of auxiliary reward can significantly improve the sample efficiency on the challenging NetHack benchmark.
arXiv Detail & Related papers (2023-03-31T18:03:30Z)
- Practical Recommendations for Replay-based Continual Learning Methods [18.559132470835937]
Continual Learning requires the model to learn from a stream of dynamic, non-stationary data without forgetting previous knowledge.
Replay approaches have empirically proved to be the most effective ones.
arXiv Detail & Related papers (2022-03-19T12:44:44Z)
- Learning Bayesian Sparse Networks with Full Experience Replay for Continual Learning [54.7584721943286]
Continual Learning (CL) methods aim to enable machine learning models to learn new tasks without catastrophic forgetting of those that have been previously mastered.
Existing CL approaches often keep a buffer of previously-seen samples, perform knowledge distillation, or use regularization techniques towards this goal.
We propose to select and activate only a sparse set of neurons for learning current and past tasks at any stage.
arXiv Detail & Related papers (2022-02-21T13:25:03Z)
- vCLIMB: A Novel Video Class Incremental Learning Benchmark [53.90485760679411]
We introduce vCLIMB, a novel video continual learning benchmark.
vCLIMB is a standardized test-bed to analyze catastrophic forgetting of deep models in video continual learning.
We propose a temporal consistency regularization that can be applied on top of memory-based continual learning methods.
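Purely as a guess at what such a temporal consistency term could look like, the sketch below penalizes disagreement between a model's predictions on adjacent frames of the same stored clip; the paper's exact formulation may differ, and every name and shape here is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def temporal_consistency_loss(model, clip_frames):
    """Assumed sketch: encourage similar predictions on adjacent frames.

    `clip_frames` has shape (T, C, H, W) for one video clip kept in the
    replay memory; the penalty is the mean squared difference between
    softmax outputs of consecutive frames.
    """
    probs = F.softmax(model(clip_frames), dim=-1)      # (T, num_classes)
    return F.mse_loss(probs[1:], probs[:-1])


if __name__ == "__main__":
    # Toy frame-level classifier: flatten each frame and map to 5 classes.
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 5))
    clip = torch.randn(6, 3, 8, 8)                     # 6 frames of one clip
    print(temporal_consistency_loss(model, clip).item())
```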
arXiv Detail & Related papers (2022-01-23T22:14:17Z)
- An Investigation of Replay-based Approaches for Continual Learning [79.0660895390689]
Continual learning (CL) is a major challenge in machine learning (ML); it describes the ability to learn several tasks sequentially without catastrophic forgetting (CF).
Several solution classes have been proposed, of which so-called replay-based approaches seem very promising due to their simplicity and robustness.
We empirically investigate replay-based approaches of continual learning and assess their potential for applications.
arXiv Detail & Related papers (2021-08-15T15:05:02Z)
- Generative Feature Replay with Orthogonal Weight Modification for Continual Learning [20.8966035274874]
Generative replay is a promising strategy that generates and replays pseudo data for previous tasks to alleviate catastrophic forgetting.
We propose to 1) replay penultimate-layer features with a generative model and 2) leverage a self-supervised auxiliary task to further enhance feature stability (a loose sketch of feature-level replay is given below).
Empirical results on several datasets show that our method consistently achieves substantial improvements over the strong OWM baseline.
arXiv Detail & Related papers (2020-05-07T13:56:22Z)
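To make the feature-replay idea in the entry above more concrete, here is a loose sketch in which a small generator produces penultimate-layer features for old classes and the classifier head is trained on them alongside real features from the current task. The split architecture, the `feature_generator` interface, and all hyperparameters are assumptions for illustration; training of the generator itself is omitted.

```python
import torch
import torch.nn.functional as F

class SplitClassifier(torch.nn.Module):
    """Backbone producing penultimate features plus a linear head."""

    def __init__(self, in_dim=32, feat_dim=16, num_classes=10):
        super().__init__()
        self.backbone = torch.nn.Sequential(torch.nn.Linear(in_dim, feat_dim),
                                            torch.nn.ReLU())
        self.head = torch.nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        return self.head(self.backbone(x))


def feature_replay_step(model, head_optimizer, feature_generator,
                        new_x, new_y, old_classes, n_replay=32):
    """Mix real current-task features with generated old-class features.

    `feature_generator(labels)` is a hypothetical trained generator that maps
    class labels to plausible penultimate-layer features.
    """
    real_feat = model.backbone(new_x)
    old_y = torch.tensor(old_classes).repeat(n_replay // len(old_classes) + 1)[:n_replay]
    fake_feat = feature_generator(old_y)                  # (n_replay, feat_dim)

    feats = torch.cat([real_feat, fake_feat])
    labels = torch.cat([new_y, old_y])
    head_optimizer.zero_grad()
    loss = F.cross_entropy(model.head(feats), labels)
    loss.backward()
    head_optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = SplitClassifier()
    opt = torch.optim.SGD(model.head.parameters(), lr=0.1)
    # Stand-in "generator": random features per requested label, demo only.
    fake_generator = lambda labels: torch.randn(len(labels), 16)
    loss = feature_replay_step(model, opt, fake_generator,
                               torch.randn(8, 32), torch.randint(5, 10, (8,)),
                               old_classes=[0, 1, 2, 3, 4])
    print(f"head loss: {loss:.3f}")
```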