Streaming Continual Learning for Unified Adaptive Intelligence in Dynamic Environments
- URL: http://arxiv.org/abs/2603.01695v1
- Date: Mon, 02 Mar 2026 10:24:37 GMT
- Title: Streaming Continual Learning for Unified Adaptive Intelligence in Dynamic Environments
- Authors: Federico Giannini, Giacomo Ziffer, Andrea Cossu, Vincenzo Lomonaco,
- Abstract summary: Continual Learning (CL) and Streaming Machine Learning (SML) are two research areas that tackle this arduous task. We put forward a unified setting, Streaming Continual Learning (SCL), that harnesses the benefits of both CL and SML.
- Score: 5.713812353511933
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Developing effective predictive models becomes challenging in dynamic environments that continuously produce data and constantly change. Continual Learning (CL) and Streaming Machine Learning (SML) are two research areas that tackle this arduous task. We put forward a unified setting that harnesses the benefits of both CL and SML: their ability to quickly adapt to non-stationary data streams without forgetting previous knowledge. We refer to this setting as Streaming Continual Learning (SCL). SCL does not replace either CL or SML. Instead, it extends the techniques and approaches considered by both fields. We start by briefly describing CL and SML and unifying the languages of the two frameworks. We then present the key features of SCL. We finally highlight the importance of bridging the two communities to advance the field of intelligent systems.
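The abstract's core idea — adapting quickly to a non-stationary stream (the SML side) while limiting forgetting of past concepts (the CL side) — can be sketched minimally. The class, its methods, and the toy drifting stream below are illustrative assumptions for this listing, not code or an algorithm from the paper:

```python
import random

class StreamingContinualLearner:
    """Illustrative sketch of the SCL idea: an online perceptron that
    learns one example at a time and replays a few stored examples
    from a small reservoir-sampled buffer to reduce forgetting."""

    def __init__(self, n_features, buffer_size=50, lr=0.1, seed=0):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr
        self.buffer = []                 # replay buffer of (x, y) pairs
        self.buffer_size = buffer_size
        self.rng = random.Random(seed)
        self.seen = 0                    # stream examples observed so far

    def predict(self, x):
        s = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1 if s >= 0 else 0

    def _update(self, x, y):
        err = y - self.predict(x)        # perceptron-style update, err in {-1, 0, 1}
        if err:
            for i, xi in enumerate(x):
                self.w[i] += self.lr * err * xi
            self.b += self.lr * err

    def learn_one(self, x, y):
        self._update(x, y)               # adapt to the newest example (plasticity)
        if self.buffer:                  # replay one stored example (stability)
            xr, yr = self.rng.choice(self.buffer)
            self._update(xr, yr)
        self.seen += 1                   # reservoir sampling keeps the buffer
        if len(self.buffer) < self.buffer_size:   # an unbiased sample of the stream
            self.buffer.append((x, y))
        else:
            j = self.rng.randrange(self.seen)
            if j < self.buffer_size:
                self.buffer[j] = (x, y)

# Prequential ("test-then-train") evaluation on a toy stream whose
# decision boundary flips halfway through, i.e. a single concept drift.
data_rng = random.Random(42)
model = StreamingContinualLearner(n_features=2)
correct, n = 0, 400
for t in range(n):
    x = [data_rng.uniform(-1, 1) for _ in range(2)]
    y = int(x[0] > 0) if t < n // 2 else int(x[0] <= 0)
    correct += int(model.predict(x) == y)   # test first...
    model.learn_one(x, y)                   # ...then train
accuracy = correct / n
```

Under this sketch, the replay buffer slows adaptation right after the drift but keeps earlier knowledge from being overwritten wholesale; tuning the buffer size trades stability against plasticity, which is exactly the tension the SCL setting makes explicit.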
Related papers
- A Practical Guide to Streaming Continual Learning [53.995807801604506]
Continual Learning (CL) and Streaming Machine Learning (SML) study the ability of agents to learn from a stream of non-stationary data. Despite sharing some similarities, they address different and complementary challenges. We discuss Streaming Continual Learning (SCL), an emerging paradigm providing a unifying solution to real-world problems.
arXiv Detail & Related papers (2026-03-02T10:06:34Z)
- LibContinual: A Comprehensive Library towards Realistic Continual Learning [62.34449396069085]
A fundamental challenge in Continual Learning (CL) is catastrophic forgetting, where adapting to new tasks degrades performance on previous ones. We propose LibContinual, a comprehensive and reproducible library designed to serve as a foundational platform for realistic CL.
arXiv Detail & Related papers (2025-12-26T13:59:13Z)
- Bridging Streaming Continual Learning via In-Context Large Tabular Models [37.26465083968656]
We argue that in-context large tabular models (LTMs) provide a natural bridge for Streaming Continual Learning (SCL). In our view, streams should be summarized on-the-fly into compact sketches that can be consumed by LTMs. We show how the SML and CL communities implicitly adopt a divide-and-conquer strategy to manage the tension between plasticity and stability.
arXiv Detail & Related papers (2025-12-12T15:47:26Z)
- Unlocking In-Context Learning for Natural Datasets Beyond Language Modelling [33.66383220833958]
Large Language Models (LLMs) exhibit In-Context Learning (ICL). ICL enables the model to perform new tasks conditioning only on the examples provided in the context, without updating the model's weights.
arXiv Detail & Related papers (2025-01-09T09:45:05Z)
- In-context Continual Learning Assisted by an External Continual Learner [19.382196203113836]
Existing continual learning (CL) methods rely on fine-tuning or adapting large language models (LLMs). We introduce InCA, a novel approach that integrates an external continual learner (ECL) with ICL to enable scalable CL without catastrophic forgetting (CF).
arXiv Detail & Related papers (2024-12-20T04:44:41Z)
- Adapter-Enhanced Semantic Prompting for Continual Learning [91.63494614012362]
Continual learning (CL) enables models to adapt to evolving data streams. Traditional methods usually retain past data for replay or add extra branches to the model to learn new knowledge. We propose a novel lightweight CL framework that integrates prompt tuning and adapter techniques.
arXiv Detail & Related papers (2024-12-15T06:14:55Z)
- Investigating the Pre-Training Dynamics of In-Context Learning: Task Recognition vs. Task Learning [99.05401042153214]
In-context learning (ICL) is potentially attributed to two major abilities: task recognition (TR) and task learning (TL).
We take the first step by examining the pre-training dynamics of the emergence of ICL.
We propose a simple yet effective method to better integrate these two abilities for ICL at inference time.
arXiv Detail & Related papers (2024-06-20T06:37:47Z)
- CoLeCLIP: Open-Domain Continual Learning via Joint Task Prompt and Vocabulary Learning [38.063942750061585]
We introduce a novel approach, CoLeCLIP, that learns an open-domain CL model based on CLIP.
CoLeCLIP outperforms state-of-the-art methods for open-domain CL under both task- and class-incremental learning settings.
arXiv Detail & Related papers (2024-03-15T12:28:21Z)
- SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language Models [71.78800549517298]
Continual learning (CL) ability is vital for deploying large language models (LLMs) in the dynamic world.
Existing methods devise a learning module that acquires task-specific knowledge via parameter-efficient tuning (PET) blocks, and a selection module that picks the corresponding block for each test input.
We propose a novel Shared Attention Framework (SAPT) to align PET learning and selection via a Shared Attentive Learning & Selection module.
arXiv Detail & Related papers (2024-01-16T11:45:03Z)
- Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning [77.7070536959126]
In-context learning (ICL) emerges as a promising capability of large language models (LLMs).
In this paper, we investigate the working mechanism of ICL through an information flow lens.
We introduce an anchor re-weighting method to improve ICL performance, a demonstration compression technique to expedite inference, and an analysis framework for diagnosing ICL errors in GPT2-XL.
arXiv Detail & Related papers (2023-05-23T15:26:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.