Adiabatic replay for continual learning
- URL: http://arxiv.org/abs/2303.13157v1
- Date: Thu, 23 Mar 2023 10:18:06 GMT
- Title: Adiabatic replay for continual learning
- Authors: Alexander Krawczyk and Alexander Gepperth
- Abstract summary: Generative replay spends an increasing amount of time just re-learning what is already known.
We propose a replay-based CL strategy that we term adiabatic replay (AR).
We verify experimentally that AR is superior to state-of-the-art deep generative replay using VAEs.
- Score: 138.7878582237908
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional replay-based approaches to continual learning (CL) require, for
each learning phase with new data, the replay of samples representing all of
the previously learned knowledge in order to avoid catastrophic forgetting.
Since the amount of learned knowledge grows over time in CL problems,
generative replay spends an increasing amount of time just re-learning what is
already known. In this proof-of-concept study, we propose a replay-based CL
strategy that we term adiabatic replay (AR), which derives its efficiency from
the (reasonable) assumption that each new learning phase is adiabatic, i.e.,
represents only a small addition to existing knowledge. Each new learning phase
triggers a sampling process that selectively replays, from the body of existing
knowledge, only those samples that are similar to the new data, instead of
replaying all of it. Complete replay is not required since AR represents the
data distribution by GMMs, which are capable of selectively updating their
internal representation only where data statistics have changed. As long as
additions are adiabatic, the number of samples to be replayed need not
depend on the amount of previously acquired knowledge at all. We verify
experimentally that AR is superior to state-of-the-art deep generative replay
using VAEs.
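The selective-sampling step is easy to illustrate. The sketch below is a minimal, hypothetical rendering of that idea, not the authors' implementation: it uses scikit-learn's GaussianMixture (fitted with covariance_type="full") as a stand-in for the paper's GMM, and the function name selective_replay is invented.

```python
# Hypothetical sketch: replay only from GMM components that the new data
# activates, instead of sampling the whole learned distribution.
import numpy as np
from sklearn.mixture import GaussianMixture

def selective_replay(gmm: GaussianMixture, new_data: np.ndarray,
                     n_replay: int) -> np.ndarray:
    # Average posterior responsibility of each component for the new batch.
    resp = gmm.predict_proba(new_data).mean(axis=0)
    # Spend the replay budget proportionally to those responsibilities.
    counts = np.random.multinomial(n_replay, resp / resp.sum())
    samples = []
    for k, n_k in enumerate(counts):
        if n_k > 0:  # sample only components the new data touches
            samples.append(np.random.multivariate_normal(
                gmm.means_[k], gmm.covariances_[k], size=n_k))
    return np.vstack(samples)
```

Because the replay budget is allocated by the new batch's component responsibilities, the number of replayed samples scales with how much of the existing model the new data touches, not with how much has been learned overall, which is the adiabatic assumption at work.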
Related papers
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for
Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods proposes replaying the data of experienced tasks when learning new ones.
However, this is often impractical due to memory constraints or data privacy concerns.
As a replacement, data-free replay methods have been proposed, which invert samples from the classification model.
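As a rough illustration of the inversion idea, the hypothetical PyTorch sketch below optimizes random noise until a frozen classifier assigns it a chosen label; invert_samples and all hyperparameters are illustrative, and practical data-free replay methods add regularizers (e.g., matching batch-norm statistics) that are omitted here.

```python
# Hypothetical sketch of data-free replay via model inversion.
import torch
import torch.nn.functional as F

def invert_samples(model: torch.nn.Module, target_class: int,
                   n: int, shape=(3, 32, 32), steps: int = 200) -> torch.Tensor:
    """Synthesize pseudo-exemplars for target_class by optimizing random
    noise so the frozen classifier assigns it that label."""
    model.eval()
    x = torch.randn(n, *shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=0.05)
    labels = torch.full((n,), target_class, dtype=torch.long)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), labels)  # push x toward target_class
        loss.backward()
        opt.step()
    return x.detach()
```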
arXiv Detail & Related papers (2024-01-12T12:51:12Z) - Temporal Difference Learning with Experience Replay [3.5823366350053325]
Temporal-difference (TD) learning is widely regarded as one of the most popular algorithms in reinforcement learning (RL).
We present a simple decomposition of the Markovian noise terms and provide finite-time error bounds for TD-learning with experience replay.
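To make the analyzed setting concrete, here is a minimal, hypothetical sketch of tabular TD(0) with a replay buffer; the paper's bounds concern this kind of update applied to transitions re-drawn from a buffer, though the function and parameter names below are invented.

```python
# Hypothetical sketch: tabular TD(0) updates drawn from a replay buffer.
import random
from collections import deque

def td0_with_replay(transitions, n_states, alpha=0.1, gamma=0.99,
                    buffer_size=10_000, replays_per_step=4):
    """Estimate state values from a stream of (s, r, s') transitions,
    replaying stored transitions to reuse experience."""
    V = [0.0] * n_states
    buffer = deque(maxlen=buffer_size)
    for s, r, s_next in transitions:
        buffer.append((s, r, s_next))
        k = min(replays_per_step, len(buffer))
        for bs, br, bs_next in random.sample(buffer, k):
            V[bs] += alpha * (br + gamma * V[bs_next] - V[bs])  # TD(0) update
    return V
```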
arXiv Detail & Related papers (2023-06-16T10:25:43Z) - Continual Learning with Strong Experience Replay [32.154995019080594]
We propose a CL method with Strong Experience Replay (SER).
In addition to distilling past experience from the memory buffer, SER utilizes future experiences mimicked on the current training data.
Experimental results on multiple image classification datasets show that our SER method surpasses the state-of-the-art methods by a noticeable margin.
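A hedged sketch of what such a combined objective might look like in PyTorch is given below; it is not the authors' exact loss. The distillation terms use a simple MSE between the new model's logits and a frozen copy of the old model, on both buffered (past) and current (future-mimicking) data, and all names are illustrative.

```python
# Hypothetical sketch of a replay-plus-distillation loss in the spirit of SER.
import torch
import torch.nn.functional as F

def ser_style_loss(model, old_model, x_new, y_new, x_buf, y_buf, lam=1.0):
    logits_new = model(x_new)
    logits_buf = model(x_buf)
    with torch.no_grad():                 # frozen snapshot of the old model
        old_buf = old_model(x_buf)
        old_new = old_model(x_new)
    # Supervised terms on current and buffered data.
    ce = F.cross_entropy(logits_new, y_new) + F.cross_entropy(logits_buf, y_buf)
    # Consistency with the old model on past (buffer) and current data.
    distill = F.mse_loss(logits_buf, old_buf) + F.mse_loss(logits_new, old_new)
    return ce + lam * distill
```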
arXiv Detail & Related papers (2023-05-23T02:42:54Z) - Detachedly Learn a Classifier for Class-Incremental Learning [11.865788374587734]
We present an analysis showing that the failure of vanilla experience replay (ER) stems from unnecessary re-learning of previous tasks and an inability to distinguish the current task from previous ones.
We propose a novel replay strategy, task-aware experience replay.
Experimental results show our method outperforms current state-of-the-art methods.
arXiv Detail & Related papers (2023-02-23T01:35:44Z) - Few-Shot Class-Incremental Learning via Entropy-Regularized Data-Free
Replay [52.251188477192336]
Few-shot class-incremental learning (FSCIL) has been proposed aiming to enable a deep learning system to incrementally learn new classes with limited data.
We show through empirical results that adopting data replay is surprisingly favorable.
We propose data-free replay, which synthesizes data with a generator without accessing real data.
arXiv Detail & Related papers (2022-07-22T17:30:51Z) - Learning with Recoverable Forgetting [77.56338597012927]
Learning wIth Recoverable Forgetting (LIRF) explicitly handles task- or sample-specific knowledge removal and recovery.
Specifically, LIRF brings in two innovative schemes, namely knowledge deposit and withdrawal.
We conduct experiments on several datasets and demonstrate that the proposed LIRF strategy yields encouraging results with strong generalization capability.
arXiv Detail & Related papers (2022-07-17T16:42:31Z) - New Insights on Reducing Abrupt Representation Change in Online
Continual Learning [69.05515249097208]
We focus on the change in representations of observed data that arises when previously unobserved classes appear in the incoming data stream.
We show that applying Experience Replay causes the newly added classes' representations to overlap significantly with the previous classes.
We propose a new method which mitigates this issue by shielding the learned representations from drastic adaptation to accommodate new classes.
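One way to implement such shielding is an asymmetric cross-entropy in which incoming samples are scored only against classes present in the current batch, so their gradients cannot drag old-class logits around. The PyTorch sketch below is a hypothetical rendering of that idea, not the paper's exact code.

```python
# Hypothetical sketch: shield old-class logits from incoming new-class data.
import torch
import torch.nn.functional as F

def shielded_loss(model, x_in, y_in, x_buf, y_buf):
    logits_in = model(x_in)
    # Mask out classes absent from the incoming batch.
    absent = torch.ones(logits_in.size(1), dtype=torch.bool)
    absent[torch.unique(y_in)] = False
    masked = logits_in.clone()
    masked[:, absent] = -1e9  # effectively drop absent classes from the softmax
    # Buffer samples keep the ordinary full cross-entropy.
    return F.cross_entropy(masked, y_in) + F.cross_entropy(model(x_buf), y_buf)
```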
arXiv Detail & Related papers (2022-03-08T01:37:00Z) - Revisiting Fundamentals of Experience Replay [91.24213515992595]
We present a systematic and extensive analysis of experience replay in Q-learning methods.
We focus on two fundamental properties: the replay capacity and the ratio of learning updates to experience collected.
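These two properties map directly onto two knobs of a replay pipeline. The hypothetical sketch below makes them explicit as a buffer capacity and an updates-per-transition ratio; the class ReplayScheduler and its interface are invented for illustration.

```python
# Hypothetical sketch: the two knobs studied in the paper, made explicit.
import random
from collections import deque

class ReplayScheduler:
    def __init__(self, capacity: int, updates_per_transition: float):
        self.buffer = deque(maxlen=capacity)   # replay capacity
        self.ratio = updates_per_transition    # learning updates per transition
        self._debt = 0.0                       # fractional updates carried over

    def step(self, transition, update_fn, batch_size=32):
        self.buffer.append(transition)
        self._debt += self.ratio
        while self._debt >= 1.0 and len(self.buffer) >= batch_size:
            update_fn(random.sample(self.buffer, batch_size))
            self._debt -= 1.0
```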
arXiv Detail & Related papers (2020-07-13T21:22:17Z) - Generative Feature Replay with Orthogonal Weight Modification for
Continual Learning [20.8966035274874]
Generative replay is a promising strategy that generates and replays pseudo data for previous tasks to alleviate catastrophic forgetting.
We propose to 1) replay penultimate-layer features with a generative model and 2) leverage a self-supervised auxiliary task to further enhance feature stability.
Empirical results on several datasets show our method consistently achieves substantial improvements over the powerful OWM baseline.
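A minimal, hypothetical sketch of the first ingredient is shown below: a small conditional generator replays penultimate-layer features so that only the classifier head needs rehearsal. The interfaces (a feat_generator taking labels, the backbone/head split) are assumptions, and the self-supervised auxiliary task and OWM component are omitted.

```python
# Hypothetical sketch of generative feature replay at the penultimate layer.
import torch
import torch.nn.functional as F

def feature_replay_step(backbone, head, feat_generator, x_new, y_new,
                        old_labels, n_replay=64):
    feats_new = backbone(x_new)                    # real features, new task
    with torch.no_grad():                          # frozen feature generator
        idx = torch.randint(len(old_labels), (n_replay,))
        y_old = old_labels[idx]
        feats_old = feat_generator(y_old)          # pseudo-features, old tasks
    # Only the head is rehearsed on the replayed features.
    return (F.cross_entropy(head(feats_new), y_new)
            + F.cross_entropy(head(feats_old), y_old))
```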
arXiv Detail & Related papers (2020-05-07T13:56:22Z)