Trust Region Continual Learning as an Implicit Meta-Learner
- URL: http://arxiv.org/abs/2602.02417v1
- Date: Mon, 02 Feb 2026 18:19:16 GMT
- Title: Trust Region Continual Learning as an Implicit Meta-Learner
- Authors: Zekun Wang, Anant Gupta, Christopher J. MacLellan
- Abstract summary: We study a hybrid perspective: \emph{trust region continual learning}, which combines generative replay with a Fisher-metric trust region constraint. We show that, under local approximations, the resulting update admits a MAML-style interpretation with a single implicit inner step. This yields an emergent meta-learning property in continual learning.
- Score: 3.705371747297478
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Continual learning aims to acquire tasks sequentially without catastrophic forgetting, yet standard strategies face a core tradeoff: regularization-based methods (e.g., EWC) can overconstrain updates when task optima are weakly overlapping, while replay-based methods can retain performance but drift due to imperfect replay. We study a hybrid perspective: \emph{trust region continual learning} that combines generative replay with a Fisher-metric trust region constraint. We show that, under local approximations, the resulting update admits a MAML-style interpretation with a single implicit inner step: replay supplies an old-task gradient signal (query-like), while the Fisher-weighted penalty provides an efficient offline curvature shaping (support-like). This yields an emergent meta-learning property in continual learning: the model becomes an initialization that rapidly \emph{re-converges} to prior task optima after each task transition, without explicitly optimizing a bilevel objective. Empirically, on task-incremental diffusion image generation and continual diffusion-policy control, trust region continual learning achieves the best final performance and retention, and consistently recovers early-task performance faster than EWC, replay, and continual meta-learning baselines.
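To make the hybrid update concrete, the following is a minimal PyTorch-style sketch of one training step: a new-task loss, a replay loss on samples drawn from a generative model of earlier tasks (the query-like signal), and an EWC-style Fisher-weighted quadratic penalty around the previous-task solution acting as the trust-region term (the support-like signal). The helper names (`replay_generator`, `fisher_diag`, `old_params`) and the weighting coefficients are illustrative assumptions, not the authors' released code.

```python
import torch

def trust_region_cl_step(model, optimizer, new_batch, replay_generator,
                         old_params, fisher_diag, loss_fn,
                         replay_weight=1.0, trust_weight=1.0):
    """One hybrid update: new-task loss + generative-replay loss
    + Fisher-weighted quadratic penalty around the previous-task solution.

    Helper objects and hyperparameter values are illustrative assumptions.
    """
    x_new, y_new = new_batch
    # Replay data sampled from a generative model of previous tasks
    # (supplies the old-task gradient signal).
    x_old, y_old = replay_generator.sample(len(x_new))

    optimizer.zero_grad()
    loss_new = loss_fn(model(x_new), y_new)
    loss_replay = loss_fn(model(x_old), y_old)

    # Fisher-weighted quadratic penalty: keeps the update inside a trust
    # region measured in the (diagonal) Fisher metric of the previous task.
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (fisher_diag[name] * (p - old_params[name]) ** 2).sum()

    loss = loss_new + replay_weight * loss_replay + 0.5 * trust_weight * penalty
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under the paper's local approximations, the replay gradient plays the role of a MAML query loss while the Fisher penalty implicitly performs the single inner adaptation step, without ever solving a bilevel problem explicitly.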
Related papers
- Efficient Continual Learning in Language Models via Thalamically Routed Cortical Columns [0.16921396880325779]
We introduce TRC$^2$ (Thalamically Routed Cortical Columns), a decoder-only backbone that addresses continual learning at the architectural level. The resulting block is sparse and chunk-parallel, enabling efficient training and inference while preserving clean ablations of each subsystem.
arXiv Detail & Related papers (2026-02-25T23:38:16Z) - Forget Less, Retain More: A Lightweight Regularizer for Rehearsal-Based Continual Learning [51.07663354001582]
Deep neural networks suffer from catastrophic forgetting, where performance on previous tasks degrades after training on a new task. We present a novel approach to address this challenge, focusing on the intersection of memory-based methods and regularization approaches. We formulate a regularization strategy, termed the Information Maximization (IM) regularizer, for memory-based continual learning methods.
arXiv Detail & Related papers (2025-12-01T15:56:00Z) - Learning with Preserving for Continual Multitask Learning [4.847042727427382]
We introduce Learning with Preserving (LwP), a novel framework that shifts the focus from preserving task outputs to maintaining the shared representation space. LwP not only mitigates catastrophic forgetting but also consistently outperforms state-of-the-art baselines in CMTL tasks.
arXiv Detail & Related papers (2025-11-11T22:23:20Z) - Train with Perturbation, Infer after Merging: A Two-Stage Framework for Continual Learning [57.514786046966265]
We propose Perturb-and-Merge (P&M), a novel continual learning framework that integrates model merging into the CL paradigm to mitigate forgetting. Our proposed approach achieves state-of-the-art performance on several continual learning benchmark datasets.
arXiv Detail & Related papers (2025-05-28T14:14:19Z) - Scalable Strategies for Continual Learning with Replay [0.0]
We show that replay can play a foundational role in continual learning, allowing models to reconcile new information with past knowledge. In practice, however, replay scales poorly, doubling the cost of continual learning when applied naively. We introduce consolidation, a phasic approach to replay that requires up to 55% fewer replay samples for a given performance target. We then propose sequential merging, an offshoot of task arithmetic tailored to the continual learning setting, which is shown to work well in combination with replay.
arXiv Detail & Related papers (2025-05-18T18:23:50Z) - LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging [80.17238673443127]
LiNeS is a post-training editing technique designed to preserve pre-trained generalization while enhancing fine-tuned task performance. LiNeS demonstrates significant improvements in both single-task and multi-task settings across various benchmarks in vision and natural language processing.
arXiv Detail & Related papers (2024-10-22T16:26:05Z) - SLCA++: Unleash the Power of Sequential Fine-tuning for Continual Learning with Pre-training [68.7896349660824]
We present an in-depth analysis of the progressive overfitting problem through the lens of sequential fine-tuning (Seq FT).
Considering that overly fast representation learning and a biased classification layer constitute this problem, we introduce the advanced Slow Learner with Alignment (SLCA++) framework.
Our approach involves a Slow Learner that selectively reduces the learning rate of backbone parameters, and an Alignment step that aligns the disjoint classification layers in a post-hoc fashion.
arXiv Detail & Related papers (2024-08-15T17:50:07Z) - Scrutinize What We Ignore: Reining In Task Representation Shift Of Context-Based Offline Meta Reinforcement Learning [10.792687309720169]
Offline meta reinforcement learning (OMRL) has emerged as a promising approach for interaction avoidance and strong generalization performance. Previous context-based approaches rely on the intuition that alternating optimization between the context encoder and the policy can lead to performance improvements. However, this intuition does not account for the shift in task representations that each encoder update induces; we name this issue task representation shift and theoretically prove that monotonic performance improvements can be guaranteed with appropriate context encoder updates.
arXiv Detail & Related papers (2024-05-20T13:14:26Z) - Towards Continual Learning Desiderata via HSIC-Bottleneck Orthogonalization and Equiangular Embedding [55.107555305760954]
We propose a conceptually simple yet effective method that attributes forgetting to layer-wise parameter overwriting and the resulting decision boundary distortion.
Our method achieves competitive accuracy while requiring no exemplar buffer and only 1.02x the size of the base model.
arXiv Detail & Related papers (2024-01-17T09:01:29Z) - Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that the proposed method, BAdam, achieves state-of-the-art performance among prior-based methods on challenging single-headed class-incremental experiments.
arXiv Detail & Related papers (2023-09-15T17:10:51Z)