Task-Focused Consolidation with Spaced Recall: Making Neural Networks learn like college students
- URL: http://arxiv.org/abs/2507.21109v1
- Date: Thu, 10 Jul 2025 08:35:30 GMT
- Title: Task-Focused Consolidation with Spaced Recall: Making Neural Networks learn like college students
- Authors: Prital Bamnodkar
- Abstract summary: This paper introduces Task-Focused Consolidation with Spaced Recall (TFC-SR), a novel continual learning approach inspired by human learning strategies such as Active Recall, Deliberate Practice, and Spaced Repetition. TFC-SR enhances standard experience replay with a mechanism termed the Active Recall Probe.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Neural Networks often suffer from a critical limitation known as Catastrophic Forgetting, where performance on past tasks degrades after learning new ones. This paper introduces a novel continual learning approach inspired by human learning strategies such as Active Recall, Deliberate Practice, and Spaced Repetition, named Task-Focused Consolidation with Spaced Recall (TFC-SR). TFC-SR enhances standard experience replay with a mechanism we term the Active Recall Probe: a periodic, task-aware evaluation of the model's memory that stabilizes the representations of past knowledge. We test TFC-SR on the Split MNIST and Split CIFAR-100 benchmarks against leading regularization-based and replay-based baselines. Our results show that TFC-SR performs significantly better than these methods; for instance, on Split CIFAR-100 it achieves a final accuracy of 13.17% compared to standard replay's 7.40%. We demonstrate that this advantage comes from the stabilizing effect of the probe itself, not from a difference in replay volume. Additionally, we analyze the trade-off between memory size and performance and show that while TFC-SR performs better in memory-constrained environments, higher replay volume is still more effective when available memory is abundant. We conclude that TFC-SR is a robust and efficient approach, highlighting the importance of integrating active memory retrieval mechanisms into continual learning systems.
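The abstract's core loop can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: a toy perceptron stands in for the network, the buffer uses reservoir sampling, and the probe is evaluation-only (it computes per-task accuracy on buffered exemplars without updating weights), matching the abstract's description of a periodic, task-aware evaluation. All names and hyperparameters below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class ReplayBuffer:
    """Reservoir-sampled store of past (x, y, task_id) triples."""
    def __init__(self, capacity):
        self.capacity, self.items, self.seen = capacity, [], 0

    def add(self, x, y, task_id):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append((x, y, task_id))
        else:
            j = rng.integers(self.seen)     # reservoir sampling keeps a uniform subset
            if j < self.capacity:
                self.items[j] = (x, y, task_id)

    def sample(self, n):
        k = min(n, len(self.items))
        idx = rng.choice(len(self.items), size=k, replace=False)
        return [self.items[i] for i in idx]

def active_recall_probe(w, buffer):
    """Periodic, task-aware evaluation: per-task accuracy on buffered exemplars."""
    stats = {}
    for x, y, t in buffer.items:
        hit, tot = stats.get(t, (0, 0))
        stats[t] = (hit + int(int(x @ w > 0) == y), tot + 1)
    return {t: hit / tot for t, (hit, tot) in stats.items()}

def make_batch(task_id, n=8):
    """Task t labels points by the sign of feature t, so the tasks interfere."""
    x = rng.normal(size=(n, 2))
    y = (x[:, task_id] > 0).astype(int)
    return x, y

w = np.zeros(2)                      # toy linear model
buf = ReplayBuffer(capacity=50)
probe_every, lr = 25, 0.1
probe_log = []

for task_id in (0, 1):               # tasks arrive sequentially
    for step in range(100):
        x_new, y_new = make_batch(task_id)
        batch = list(zip(x_new, y_new, [task_id] * len(y_new))) + buf.sample(8)
        for x, y, t in batch:        # perceptron updates on new + replayed data
            pred = int(x @ w > 0)
            w = w + lr * (y - pred) * x
        for x, y in zip(x_new, y_new):
            buf.add(x, y, task_id)
        if (step + 1) % probe_every == 0:
            probe_log.append(active_recall_probe(w, buf))

print(probe_log[-1])                 # per-task accuracy on the buffer at the end
```

In TFC-SR the probe's stabilizing effect, not the extra replay volume, is what drives the reported gains; here the probe only logs per-task accuracy, which is the "memory check" the abstract describes.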
Related papers
- Continual Learning for Adaptive AI Systems
Cluster-Aware Replay (CAR) is a hybrid continual learning framework that integrates a small, class-balanced replay buffer with a regularization term. CAR better preserves earlier-task performance compared to fine-tuning alone.
arXiv Detail & Related papers (2025-10-09T00:44:32Z)
- GeRe: Towards Efficient Anti-Forgetting in Continual Learning of LLM via General Samples Replay
We propose General Sample Replay (GeRe), a framework that uses ordinary pretraining texts for efficient anti-forgetting. We are the first to validate that a small, fixed set of pre-collected general replay samples is sufficient to resolve both concerns: retaining general capabilities while promoting overall performance.
arXiv Detail & Related papers (2025-08-06T17:42:22Z)
- Online Continual Learning via Spiking Neural Networks with Sleep Enhanced Latent Replay
This paper proposes a novel online continual learning approach termed SESLR. It incorporates a sleep-enhanced latent replay scheme with spiking neural networks (SNNs). Experiments on both conventional (MNIST, CIFAR10) and neuromorphic (NMNIST, CIFAR10-DVS) datasets demonstrate SESLR's effectiveness.
arXiv Detail & Related papers (2025-06-23T12:22:39Z)
- Replay to Remember (R2R): An Efficient Uncertainty-driven Unsupervised Continual Learning Framework Using Generative Replay
Continual learning entails progressively acquiring knowledge from new data while retaining previously acquired knowledge. We present a novel uncertainty-driven unsupervised continual learning framework using generative replay, namely Replay to Remember (R2R). Our proposed R2R approach improves knowledge retention, achieving state-of-the-art accuracies of 98.13%, 73.06%, 93.41%, 95.18%, and 59.74% across its evaluated benchmarks.
arXiv Detail & Related papers (2025-05-07T20:29:31Z)
- Forget Forgetting: Continual Learning in a World of Abundant Memory
Continual learning has traditionally focused on minimizing exemplar memory. This paper challenges this paradigm by investigating a more realistic regime. We find that the core challenge shifts from stability to plasticity, as models become biased toward prior tasks and struggle to learn new ones.
arXiv Detail & Related papers (2025-02-11T05:40:52Z)
- TEAL: New Selection Strategy for Small Buffers in Experience Replay Class Incremental Learning
We introduce TEAL, a novel approach to populate the memory with exemplars. We show that TEAL enhances the average accuracy of existing class-incremental methods.
arXiv Detail & Related papers (2024-06-30T12:09:08Z)
- EcoTTA: Memory-Efficient Continual Test-time Adaptation via Self-distilled Regularization
TTA may primarily be conducted on edge devices with limited memory.
Long-term adaptation often leads to catastrophic forgetting and error accumulation.
We present lightweight meta networks that can adapt the frozen original networks to the target domain.
arXiv Detail & Related papers (2023-03-03T13:05:30Z)
- Task-Adaptive Saliency Guidance for Exemplar-free Class Incremental Learning
We introduce task-adaptive saliency for EFCIL and propose a new framework, which we call Task-Adaptive Saliency Supervision (TASS).
Our experiments demonstrate that our method can better preserve saliency maps across tasks and achieve state-of-the-art results on the CIFAR-100, Tiny-ImageNet, and ImageNet-Subset EFCIL benchmarks.
arXiv Detail & Related papers (2022-12-16T02:43:52Z)
- A simple but strong baseline for online continual learning: Repeated Augmented Rehearsal
Online continual learning (OCL) aims to train neural networks incrementally from a non-stationary data stream with a single pass through data.
Rehearsal-based methods attempt to approximate the observed input distributions over time with a small memory and revisit them later to avoid forgetting.
We provide theoretical insights on the inherent memory overfitting risk from the viewpoint of biased and dynamic empirical risk minimization.
arXiv Detail & Related papers (2022-09-28T08:43:35Z)
- Learning Bayesian Sparse Networks with Full Experience Replay for Continual Learning
Continual Learning (CL) methods aim to enable machine learning models to learn new tasks without catastrophic forgetting of those that have been previously mastered.
Existing CL approaches often keep a buffer of previously-seen samples, perform knowledge distillation, or use regularization techniques towards this goal.
We propose to only activate and select sparse neurons for learning current and past tasks at any stage.
arXiv Detail & Related papers (2022-02-21T13:25:03Z)
- Convergence Results For Q-Learning With Experience Replay
We provide a convergence rate guarantee, and discuss how it compares to the convergence of Q-learning depending on important parameters such as the frequency and number of iterations of replay.
We also provide theoretical evidence showing when we might expect this to strictly improve performance, by introducing and analyzing a simple class of MDPs.
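The setting this entry analyzes (Q-learning where each environment step is augmented with replayed transitions) can be illustrated with a toy tabular sketch. The chain environment, exploration rate, and replay batch size below are illustrative assumptions, not the paper's construction; the replay frequency and batch size are exactly the parameters the convergence analysis studies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Deterministic 5-state chain: action 1 moves right, action 0 moves left;
# entering the last state yields reward 1 and ends the episode.
N_STATES, GAMMA, ALPHA, EPS = 5, 0.9, 0.5, 0.2

def env_step(s, a):
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, r, s2 == N_STATES - 1

Q = np.zeros((N_STATES, 2))
replay = []  # stored (s, a, r, s2, done) transitions

for episode in range(200):
    s, done = 0, False
    while not done:
        if rng.random() < EPS or Q[s, 0] == Q[s, 1]:
            a = int(rng.integers(2))          # explore, and break ties randomly
        else:
            a = int(Q[s].argmax())
        s2, r, done = env_step(s, a)
        replay.append((s, a, r, s2, done))
        # One fresh transition plus a few replayed ones per environment step.
        idx = rng.integers(len(replay), size=4)
        for bs, ba, br, bs2, bdone in [(s, a, r, s2, done)] + [replay[i] for i in idx]:
            target = br if bdone else br + GAMMA * Q[bs2].max()
            Q[bs, ba] += ALPHA * (target - Q[bs, ba])
        s = s2
```

After training, the greedy policy moves right everywhere and Q approaches the discounted optimum (Q(s, right) near GAMMA^(3-s) for s < 4), which is the kind of convergence behavior the cited analysis bounds as a function of replay frequency and volume.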
arXiv Detail & Related papers (2021-12-08T10:22:49Z)
- Always Be Dreaming: A New Approach for Data-Free Class-Incremental Learning
We consider the high-impact problem of Data-Free Class-Incremental Learning (DFCIL).
We propose a novel incremental distillation strategy for DFCIL, contributing a modified cross-entropy training and importance-weighted feature distillation.
Our method results in up to a 25.1% increase in final task accuracy (absolute difference) compared to SOTA DFCIL methods for common class-incremental benchmarks.
arXiv Detail & Related papers (2021-06-17T17:56:08Z)
- Improving Computational Efficiency in Visual Reinforcement Learning via Stored Embeddings
We present Stored Embeddings for Efficient Reinforcement Learning (SEER). SEER is a simple modification of existing off-policy deep reinforcement learning methods. We show that SEER does not degrade the performance of RL agents while significantly saving computation and memory.
arXiv Detail & Related papers (2021-03-04T08:14:10Z)
- The Effectiveness of Memory Replay in Large Scale Continual Learning
We study continual learning in the large scale setting where tasks in the input sequence are not limited to classification, and the outputs can be of high dimension.
Existing methods usually replay only the input-output pairs.
We propose to replay the activation of the intermediate layers in addition to the input-output pairs.
arXiv Detail & Related papers (2020-10-06T01:23:12Z)
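The idea in this last entry, replaying intermediate-layer activations alongside input-output pairs, can be sketched as follows. The tiny two-layer network, the MSE activation-matching term, and the 0.5 weighting are illustrative assumptions, not the paper's actual architecture or loss.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny two-layer net: h = tanh(x W1), out = h W2.
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def forward(x):
    h = np.tanh(x @ W1)
    return h, h @ W2

# The memory stores the input, the target, AND the hidden activation
# captured at the time the example was memorized.
memory = []
for _ in range(16):
    x = rng.normal(size=4)
    y = np.array([float(x.sum() > 0)])
    h, _ = forward(x)
    memory.append((x, y, h.copy()))

def replay_loss(x, y, h_stored):
    """Standard input-output replay term plus an activation-matching term."""
    h, out = forward(x)
    task = ((out - y) ** 2).sum()          # usual replay on (x, y)
    feat = ((h - h_stored) ** 2).mean()    # keep intermediate features stable
    return task + 0.5 * feat

losses = [replay_loss(x, y, h) for x, y, h in memory]
```

As long as the weights have not drifted since an example was stored, the activation term is zero; once training on later tasks moves the hidden representations, the term penalizes that drift in addition to the usual output error.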
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.