Improving Replay Sample Selection and Storage for Less Forgetting in
Continual Learning
- URL: http://arxiv.org/abs/2308.01895v1
- Date: Thu, 3 Aug 2023 17:46:27 GMT
- Authors: Daniel Brignac, Niels Lobo, Abhijit Mahalanobis
- Abstract summary: Continual learning seeks to enable deep learners to train on a series of tasks of unknown length without suffering from the catastrophic forgetting of previous tasks.
One effective solution is replay, which involves storing a few previous experiences in memory and replaying them when learning the current task.
This study addresses the questions of which samples are most informative to store and how many to store, through a novel comparison of the commonly used reservoir sampling to various alternative population strategies.
- Score: 1.2891210250935146
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Continual learning seeks to enable deep learners to train on a series of
tasks of unknown length without suffering from the catastrophic forgetting of
previous tasks. One effective solution is replay, which involves storing a few
previous experiences in memory and replaying them when learning the current
task. However, there is still room for improvement when it comes to selecting
the most informative samples for storage and determining the optimal number of
samples to be stored. This study aims to address these issues through a novel
comparison of the commonly used reservoir sampling to various alternative
population strategies, and by providing a detailed analysis of how to find the
optimal number of stored samples.
Related papers
- Diversified Batch Selection for Training Acceleration [68.67164304377732]
A prevalent research line, known as online batch selection, explores selecting informative subsets during the training process.
Vanilla reference-model-free methods independently score and select data in a sample-wise manner.
We propose Diversified Batch Selection (DivBS), which is reference-model-free and can efficiently select diverse and representative samples.
arXiv Detail & Related papers (2024-06-07T12:12:20Z) - Watch Your Step: Optimal Retrieval for Continual Learning at Scale [1.7265013728931]
In continual learning, a model learns incrementally over time while minimizing interference between old and new tasks.
One of the most widely used approaches in continual learning is referred to as replay.
We propose a framework for evaluating selective retrieval strategies, categorized by simple, independent class- and sample-selective primitives.
We propose a set of strategies to prevent duplicate replays and explore whether new samples with low loss values can be learned without replay.
arXiv Detail & Related papers (2024-04-16T17:35:35Z) - Enhancing Consistency and Mitigating Bias: A Data Replay Approach for
Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods propose to replay the data of experienced tasks when learning new tasks.
However, storing such data is often infeasible in practice due to memory constraints or data privacy concerns.
As a replacement, data-free data replay methods are proposed by inverting samples from the classification model.
arXiv Detail & Related papers (2024-01-12T12:51:12Z) - Continual Learning with Strong Experience Replay [32.154995019080594]
We propose a CL method with Strong Experience Replay (SER)
In addition to distilling past experience from the memory buffer, SER utilizes future experiences mimicked on the current training data.
Experimental results on multiple image classification datasets show that our SER method surpasses the state-of-the-art methods by a noticeable margin.
arXiv Detail & Related papers (2023-05-23T02:42:54Z) - How Relevant is Selective Memory Population in Lifelong Language
Learning? [15.9310767099639]
State-of-the-art approaches rely on sparse experience replay as the primary approach to prevent forgetting.
We investigate how relevant the selective memory population is in the lifelong learning process of text classification and question-answering tasks.
arXiv Detail & Related papers (2022-10-03T13:52:54Z) - A Benchmark and Empirical Analysis for Replay Strategies in Continual
Learning [2.922007656878633]
Computational systems are not, in general, capable of learning tasks sequentially without forgetting earlier ones.
This paper presents an in-depth evaluation of memory replay methods.
All experiments are conducted on multiple datasets under various domains.
arXiv Detail & Related papers (2022-08-04T13:48:11Z) - Sample Condensation in Online Continual Learning [13.041782266237]
Online continual learning is a challenging scenario in which the model must learn from a non-stationary stream of data.
We propose OLCGM, a novel replay-based continual learning strategy that uses knowledge condensation techniques to continuously compress the memory.
arXiv Detail & Related papers (2022-06-23T17:23:42Z) - ALLSH: Active Learning Guided by Local Sensitivity and Hardness [98.61023158378407]
We propose to retrieve unlabeled samples with a local sensitivity and hardness-aware acquisition function.
Our method achieves consistent gains over the commonly used active learning strategies in various classification tasks.
arXiv Detail & Related papers (2022-05-10T15:39:11Z) - Memory Replay with Data Compression for Continual Learning [80.95444077825852]
We propose memory replay with data compression to reduce the storage cost of old training samples.
We extensively validate this across several benchmarks of class-incremental learning and in a realistic scenario of object detection for autonomous driving.
arXiv Detail & Related papers (2022-02-14T10:26:23Z) - An Investigation of Replay-based Approaches for Continual Learning [79.0660895390689]
Continual learning (CL) is a major challenge of machine learning (ML) and describes the ability to learn several tasks sequentially without catastrophic forgetting (CF).
Several solution classes have been proposed, of which so-called replay-based approaches seem very promising due to their simplicity and robustness.
We empirically investigate replay-based approaches of continual learning and assess their potential for applications.
arXiv Detail & Related papers (2021-08-15T15:05:02Z) - Automatic Recall Machines: Internal Replay, Continual Learning and the
Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead, the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.