Rehearsal revealed: The limits and merits of revisiting samples in
continual learning
- URL: http://arxiv.org/abs/2104.07446v1
- Date: Thu, 15 Apr 2021 13:28:14 GMT
- Title: Rehearsal revealed: The limits and merits of revisiting samples in
continual learning
- Authors: Eli Verwimp, Matthias De Lange, Tinne Tuytelaars
- Abstract summary: We provide insight into the limits and merits of rehearsal, one of continual learning's most established methods.
We show that models trained sequentially with rehearsal tend to stay in the same low-loss region after a task has finished, but are at risk of overfitting on its sample memory.
- Score: 43.40531878205344
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning from non-stationary data streams and overcoming catastrophic
forgetting still poses a serious challenge for machine learning research.
Rather than aiming to improve state-of-the-art, in this work we provide insight
into the limits and merits of rehearsal, one of continual learning's most
established methods. We hypothesize that models trained sequentially with
rehearsal tend to stay in the same low-loss region after a task has finished,
but are at risk of overfitting on its sample memory, hence harming
generalization. We provide both conceptual and strong empirical evidence on
three benchmarks for both behaviors, bringing novel insights into the dynamics
of rehearsal and continual learning in general. Finally, we interpret important
continual learning works in the light of our findings, allowing for a deeper
understanding of their successes.
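The rehearsal setup the abstract analyzes can be illustrated with a minimal sketch: a fixed-size sample memory filled by reservoir sampling, with replayed examples mixed into each new-task batch. This is a generic illustration of rehearsal under assumed details (reservoir sampling, a fixed replay ratio), not the exact protocol of the paper; the small memory relative to the stream is also what creates the memory-overfitting risk the authors study.

```python
import random


class ReservoirBuffer:
    """Fixed-size episodic memory filled via reservoir sampling, so every
    example in the stream has an equal probability of being stored."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Replace a stored example with probability capacity / n_seen.
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        # Rehearsal: revisit a few stored examples from earlier tasks.
        k = min(k, len(self.data))
        return self.rng.sample(self.data, k)


def rehearsal_batches(stream, buffer, batch_size, replay_size):
    """Yield joint batches of new examples plus replayed old ones."""
    batch = []
    for example in stream:
        batch.append(example)
        if len(batch) == batch_size:
            yield batch + buffer.sample(replay_size)
            for ex in batch:
                buffer.add(ex)
            batch = []
```

Because the same few stored samples are revisited at every step while the rest of the old task's data is gone, the model can fit the memory itself rather than the task it stands in for.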
Related papers
- Forgetting Order of Continual Learning: Examples That are Learned First are Forgotten Last [44.31831689984837]
Catastrophic forgetting poses a significant challenge in continual learning.
Examples learned early are rarely forgotten, while those learned later are more susceptible to forgetting.
We introduce Goldilocks, a novel replay buffer sampling method that filters out examples learned too quickly or too slowly, keeping those learned at an intermediate speed.
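The "intermediate speed" criterion can be sketched as a simple quantile filter. Here the learning speed of each example is assumed to be summarized by the epoch at which it was first learned (a hypothetical proxy; the paper's exact criterion may differ), and examples in the extreme quantiles are dropped from the replay buffer candidates:

```python
def goldilocks_filter(learn_epoch, low_q=0.25, high_q=0.75):
    """Keep examples whose 'learned epoch' falls between two quantiles,
    discarding those learned too quickly or too slowly.

    learn_epoch: dict mapping example id -> epoch at which it was learned.
    """
    epochs = sorted(learn_epoch.values())
    lo = epochs[int(low_q * (len(epochs) - 1))]
    hi = epochs[int(high_q * (len(epochs) - 1))]
    return [ex for ex, e in learn_epoch.items() if lo <= e <= hi]
```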
arXiv Detail & Related papers (2024-06-14T11:31:12Z) - BrainWash: A Poisoning Attack to Forget in Continual Learning [22.512552596310176]
"BrainWash" is a novel data poisoning method tailored to impose forgetting on a continual learner.
An important feature of our approach is that the attacker requires no access to previous tasks' data.
Our experiments highlight the efficacy of BrainWash, showcasing degradation in performance across various regularization-based continual learning methods.
arXiv Detail & Related papers (2023-11-20T18:26:01Z) - Repetition In Repetition Out: Towards Understanding Neural Text
Degeneration from the Data Perspective [91.14291142262262]
This work presents a straightforward and fundamental explanation from the data perspective.
Our preliminary investigation reveals a strong correlation between the degeneration issue and the presence of repetitions in training data.
Our experiments reveal that penalizing the repetitions in training data remains critical even when considering larger model sizes and instruction tuning.
arXiv Detail & Related papers (2023-10-16T09:35:42Z) - Imitating, Fast and Slow: Robust learning from demonstrations via
decision-time planning [96.72185761508668]
Planning at Test-time (IMPLANT) is a new meta-algorithm for imitation learning.
We demonstrate that IMPLANT significantly outperforms benchmark imitation learning approaches on standard control environments.
arXiv Detail & Related papers (2022-04-07T17:16:52Z) - Class-Incremental Continual Learning into the eXtended DER-verse [17.90483695137098]
This work aims at assessing and overcoming the pitfalls of our previous proposal, Dark Experience Replay (DER).
Inspired by the way our minds constantly rewrite past recollections and set expectations for the future, we endow our model with the ability to revise its replay memory to welcome novel information regarding past data.
We show that the application of these strategies leads to remarkable improvements.
arXiv Detail & Related papers (2022-01-03T17:14:30Z) - Relational Experience Replay: Continual Learning by Adaptively Tuning
Task-wise Relationship [54.73817402934303]
We propose Relational Experience Replay (ERR), a bi-level learning framework that adaptively tunes task-wise relationships to achieve a better stability-plasticity trade-off.
ERR can consistently improve the performance of all baselines and surpass current state-of-the-art methods.
arXiv Detail & Related papers (2021-12-31T12:05:22Z) - Co$^2$L: Contrastive Continual Learning [69.46643497220586]
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
arXiv Detail & Related papers (2021-06-28T06:14:38Z) - Learning Invariant Representation for Continual Learning [5.979373021392084]
A key challenge in continual learning is catastrophically forgetting previously learned tasks when the agent faces a new one.
We propose a new pseudo-rehearsal-based method, named Learning Invariant Representation for Continual Learning (IRCL).
Disentangling the shared invariant representation helps to learn continually a sequence of tasks, while being more robust to forgetting and having better knowledge transfer.
arXiv Detail & Related papers (2021-01-15T15:12:51Z) - Bilevel Continual Learning [76.50127663309604]
We present a novel framework of continual learning named "Bilevel Continual Learning" (BCL)
Our experiments on continual learning benchmarks demonstrate the efficacy of the proposed BCL compared to many state-of-the-art methods.
arXiv Detail & Related papers (2020-07-30T16:00:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.