Do Your Best and Get Enough Rest for Continual Learning
- URL: http://arxiv.org/abs/2503.18371v1
- Date: Mon, 24 Mar 2025 06:08:37 GMT
- Title: Do Your Best and Get Enough Rest for Continual Learning
- Authors: Hankyul Kang, Gregor Seifer, Donghyun Lee, Jongbin Ryu
- Abstract summary: According to the forgetting curve theory, we can enhance memory retention by learning extensive data and taking adequate rest. We introduce the view-batch model, which adjusts the learning schedules to optimize the recall interval between retraining the same samples. We empirically show that these approaches are aligned with the forgetting curve theory, which can enhance long-term memory.
- Score: 8.17916139651372
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: According to the forgetting curve theory, we can enhance memory retention by learning extensive data and taking adequate rest. This means that in order to effectively retain new knowledge, it is essential to learn it thoroughly and ensure sufficient rest so that our brain can memorize it without forgetting. The main takeaway from this theory is that learning extensive data at once necessitates sufficient rest before learning the same data again. This aspect of human long-term memory retention can be effectively utilized to address the continual learning of neural networks. Retaining new knowledge for a long period of time without catastrophic forgetting is the critical problem of continual learning. Therefore, based on Ebbinghaus' theory, we introduce the view-batch model, which adjusts the learning schedule to optimize the recall interval between retraining the same samples. The proposed view-batch model allows the network to get enough rest to learn extensive knowledge from the same samples with a recall interval of sufficient length. To this end, we present two approaches: 1) a replay method that guarantees the optimal recall interval, and 2) a self-supervised learning method that acquires extensive knowledge from a single training sample at a time. We empirically show that these approaches are aligned with the forgetting curve theory and can enhance long-term memory. In our experiments, we also demonstrate that our method significantly improves many state-of-the-art continual learning methods across various protocols and scenarios. We open-source this project at https://github.com/hankyul2/ViewBatchModel.
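The abstract pairs two mechanisms: a replay schedule that enforces a sufficiently long recall interval before the same samples are retrained, and learning extensively from each sample per visit (for context, the Ebbinghaus forgetting curve is commonly modeled as R = e^{-t/S}, with retention R decaying over time t at a rate governed by memory stability S). The sketch below is a minimal, hypothetical illustration of that scheduling idea under stated assumptions, not the authors' implementation (their code is in the linked repository); the names ViewBatchReplayBuffer, min_interval, num_views, and augment are illustrative only.

```python
# Hypothetical sketch of a recall-interval-aware replay buffer: stored samples
# are only replayed again after `min_interval` replay steps have elapsed, and
# each replayed sample contributes several augmented views to the mini-batch.
import random


class ViewBatchReplayBuffer:
    def __init__(self, capacity, min_interval, num_views):
        self.capacity = capacity          # maximum number of stored samples
        self.min_interval = min_interval  # steps to wait before replaying the same sample again
        self.num_views = num_views        # augmented views produced per replayed sample
        self.buffer = []                  # stored (x, y) pairs
        self.last_replayed = {}           # slot index -> step at which it was last replayed
        self.n_seen = 0                   # samples offered to the buffer so far
        self.step = 0                     # replay steps taken so far

    def add(self, x, y):
        """Reservoir-style insertion of an incoming sample."""
        self.n_seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append((x, y))
        else:
            j = random.randrange(self.n_seen)
            if j < self.capacity:
                self.buffer[j] = (x, y)
                self.last_replayed.pop(j, None)  # new occupant: reset its recall record

    def sample(self, batch_size, augment):
        """Replay only slots whose recall interval has elapsed; each chosen
        sample contributes `num_views` augmented views to the mini-batch."""
        self.step += 1
        eligible = [
            i for i in range(len(self.buffer))
            if self.step - self.last_replayed.get(i, -self.min_interval) >= self.min_interval
        ]
        chosen = random.sample(eligible, min(batch_size, len(eligible)))
        views, labels = [], []
        for i in chosen:
            self.last_replayed[i] = self.step
            x, y = self.buffer[i]
            for _ in range(self.num_views):
                views.append(augment(x))  # e.g. a random crop/flip yielding a new view
                labels.append(y)
        return views, labels
```

In this sketch, min_interval plays the role of the "rest" period before the same data is learned again, while num_views > 1 stands in for the extensive learning from a single sample; how the paper actually schedules views and intervals is specified in the authors' repository.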
Related papers
- Reducing Catastrophic Forgetting in Online Class Incremental Learning Using Self-Distillation [3.8506666685467343]
In continual learning, previous knowledge is forgotten when a model learns new tasks.
In this paper, we address this problem by acquiring transferable knowledge through self-distillation.
In experiments on the CIFAR10, CIFAR100, and Mini-ImageNet datasets, our proposed method outperformed conventional methods.
arXiv Detail & Related papers (2024-09-17T16:26:33Z) - Continual Learning via Manifold Expansion Replay [36.27348867557826]
Catastrophic forgetting is a major challenge to continual learning.
We propose a novel replay strategy called Manifold Expansion Replay (MaER).
We show that the proposed method significantly improves accuracy in the continual learning setup, outperforming the state of the art.
arXiv Detail & Related papers (2023-10-12T05:09:27Z) - Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and the associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
Theoretical analysis shows that the proposed learning paradigm makes the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z) - Learning Fast, Learning Slow: A General Continual Learning Method based
on Complementary Learning System [13.041607703862724]
We propose CLS-ER, a novel dual memory experience replay (ER) method.
New knowledge is acquired while aligning the decision boundaries with the semantic memories.
Our approach achieves state-of-the-art performance on standard benchmarks.
arXiv Detail & Related papers (2022-01-29T15:15:23Z) - Relational Experience Replay: Continual Learning by Adaptively Tuning
Task-wise Relationship [54.73817402934303]
We propose Relational Experience Replay (ERR), a bi-level learning framework that adaptively tunes task-wise relationships to achieve a better stability-plasticity trade-off.
ERR can consistently improve the performance of all baselines and surpass current state-of-the-art methods.
arXiv Detail & Related papers (2021-12-31T12:05:22Z) - On the Theory of Reinforcement Learning with Once-per-Episode Feedback [120.5537226120512]
We introduce a theory of reinforcement learning in which the learner receives feedback only once at the end of an episode.
This is arguably more representative of real-world applications than the traditional requirement that the learner receive feedback at every time step.
arXiv Detail & Related papers (2021-05-29T19:48:51Z) - Learning to Continually Learn Rapidly from Few and Noisy Data [19.09933805011466]
Continual learning could be achieved via replay -- by concurrently training on externally stored old data while learning a new task.
By employing a meta-learner, which learns a learning rate per parameter per past task, we found that base learners produced strong results when less memory was available.
arXiv Detail & Related papers (2021-03-06T08:29:47Z) - Remembering for the Right Reasons: Explanations Reduce Catastrophic
Forgetting [100.75479161884935]
We propose a novel training paradigm called Remembering for the Right Reasons (RRR).
RRR stores visual model explanations for each example in the buffer and ensures the model has "the right reasons" for its predictions.
We demonstrate how RRR can be easily added to any memory or regularization-based approach and results in reduced forgetting.
arXiv Detail & Related papers (2020-10-04T10:05:27Z) - Bilevel Continual Learning [76.50127663309604]
We present a novel continual learning framework named "Bilevel Continual Learning" (BCL).
Our experiments on continual learning benchmarks demonstrate the efficacy of the proposed BCL compared to many state-of-the-art methods.
arXiv Detail & Related papers (2020-07-30T16:00:23Z) - Automatic Recall Machines: Internal Replay, Continual Learning and the
Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead, the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z) - Using Hindsight to Anchor Past Knowledge in Continual Learning [36.271906785418864]
In continual learning, the learner faces a stream of data whose distribution changes over time.
Modern neural networks are known to suffer under this setting, as they quickly forget previously acquired knowledge.
In this work, we propose an approach called anchoring, where the learner uses bilevel optimization to update its knowledge of the current task while keeping the predictions on past tasks intact.
arXiv Detail & Related papers (2020-02-19T13:21:19Z)