A Practical Guide to Streaming Continual Learning
- URL: http://arxiv.org/abs/2603.01677v1
- Date: Mon, 02 Mar 2026 10:06:34 GMT
- Title: A Practical Guide to Streaming Continual Learning
- Authors: Andrea Cossu, Federico Giannini, Giacomo Ziffer, Alessio Bernardo, Alexander Gepperth, Emanuele Della Valle, Barbara Hammer, Davide Bacciu
- Abstract summary: Continual Learning (CL) and Streaming Machine Learning (SML) study the ability of agents to learn from a stream of non-stationary data. Despite sharing some similarities, they address different and complementary challenges. We discuss Streaming Continual Learning (SCL), an emerging paradigm providing a unifying solution to real-world problems.
- Score: 53.995807801604506
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continual Learning (CL) and Streaming Machine Learning (SML) study the ability of agents to learn from a stream of non-stationary data. Despite sharing some similarities, they address different and complementary challenges. While SML focuses on rapid adaptation after changes (concept drifts), CL aims to retain past knowledge when learning new tasks. After a brief introduction to CL and SML, we discuss Streaming Continual Learning (SCL), an emerging paradigm providing a unifying solution to real-world problems, which may require both SML and CL abilities. We claim that SCL can i) connect the CL and SML communities, motivating their work towards the same goal, and ii) foster the design of hybrid approaches that can quickly adapt to new information (as in SML) without forgetting previous knowledge (as in CL). We conclude the paper with a motivating example and a set of experiments, highlighting the need for SCL by showing how CL and SML alone struggle in achieving rapid adaptation and knowledge retention.
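The stability-plasticity trade-off at the heart of SCL can be illustrated with a minimal, self-contained sketch (not code from the paper): an online learner adapts to a concept drift one example at a time (the SML ingredient), while a small reservoir replay buffer mixes in past examples to counteract forgetting (the CL ingredient). All names here (`ReplayBuffer`, `OnlinePerceptron`, the toy drifting stream) are illustrative assumptions, not the authors' implementation.

```python
import random

class ReplayBuffer:
    """Reservoir sampling: keeps a uniform random sample of the stream seen so far."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.seen = 0

    def add(self, x, y):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append((x, y))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = (x, y)

    def sample(self, k):
        return random.sample(self.items, min(k, len(self.items)))

class OnlinePerceptron:
    """1-D perceptron trained one example at a time (SML-style rapid adaptation)."""
    def __init__(self):
        self.w, self.b = 0.0, 0.0

    def predict(self, x):
        return 1 if self.w * x + self.b > 0 else 0

    def update(self, x, y, lr=0.1):
        err = y - self.predict(x)
        self.w += lr * err * x
        self.b += lr * err

def stream(n, threshold):
    """Toy stream: label is 1 when x exceeds a concept-specific threshold."""
    for _ in range(n):
        x = random.uniform(-1.0, 1.0)
        yield x, (1 if x > threshold else 0)

random.seed(0)
model, buf = OnlinePerceptron(), ReplayBuffer(capacity=50)

# Concept A (threshold 0.0), then a drift to concept B (threshold 0.5).
for concept_threshold in (0.0, 0.5):
    for x, y in stream(500, concept_threshold):
        model.update(x, y)            # adapt quickly to the current concept
        for rx, ry in buf.sample(2):  # replay past examples to resist forgetting
            model.update(rx, ry)
        buf.add(x, y)
```

Dropping the replay loop yields a purely SML-style learner that tracks the drift but forgets concept A; keeping only the replay (no updates on fresh data) yields a CL-style learner that adapts slowly. The hybrid does both, which is the behavior SCL targets.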
Related papers
- Streaming Continual Learning for Unified Adaptive Intelligence in Dynamic Environments [5.713812353511933]
Continual Learning (CL) and Streaming Machine Learning (SML) are two research areas that tackle this arduous task. We put forward a unified setting, Streaming Continual Learning, that harnesses the benefits of both CL and SML.
arXiv Detail & Related papers (2026-03-02T10:24:37Z) - In-context Continual Learning Assisted by an External Continual Learner [19.382196203113836]
Existing continual learning (CL) methods rely on fine-tuning or adapting large language models (LLMs). We introduce InCA, a novel approach that integrates an external continual learner (ECL) with ICL to enable scalable CL without CF.
arXiv Detail & Related papers (2024-12-20T04:44:41Z) - Adapter-Enhanced Semantic Prompting for Continual Learning [91.63494614012362]
Continual learning (CL) enables models to adapt to evolving data streams. Traditional methods usually retain past data for replay or add additional branches in the model to learn new knowledge. We propose a novel lightweight CL framework, which integrates prompt tuning and adapter techniques.
arXiv Detail & Related papers (2024-12-15T06:14:55Z) - CLEO: Continual Learning of Evolving Ontologies [12.18795037817058]
Continual learning (CL) aims to instill the lifelong learning of humans in intelligent systems.
General learning processes are not just limited to learning information, but also refinement of existing information.
CLEO is motivated by the need for intelligent systems to adapt to real-world changes over time.
arXiv Detail & Related papers (2024-07-11T11:32:33Z) - Link-Context Learning for Multimodal LLMs [40.923816691928536]
Link-context learning (LCL) emphasizes "reasoning from cause and effect" to augment the learning capabilities of MLLMs.
LCL guides the model to discern not only the analogy but also the underlying causal associations between data points.
To facilitate the evaluation of this novel approach, we introduce the ISEKAI dataset.
arXiv Detail & Related papers (2023-08-15T17:33:24Z) - POP: Prompt Of Prompts for Continual Learning [59.15888651733645]
Continual learning (CL) aims to mimic the human ability to learn new concepts without catastrophic forgetting.
We show that a foundation model equipped with POP learning is able to outperform classic CL methods by a significant margin.
arXiv Detail & Related papers (2023-06-14T02:09:26Z) - A Survey on In-context Learning [77.78614055956365]
In-context learning (ICL) has emerged as a new paradigm for natural language processing (NLP).
We first present a formal definition of ICL and clarify its correlation to related studies.
We then organize and discuss advanced techniques, including training strategies, prompt designing strategies, and related analysis.
arXiv Detail & Related papers (2022-12-31T15:57:09Z) - Beyond Supervised Continual Learning: a Review [69.9674326582747]
Continual Learning (CL) is a flavor of machine learning where the usual assumption of stationary data distribution is relaxed or omitted.
Changes in the data distribution can cause the so-called catastrophic forgetting (CF) effect: an abrupt loss of previous knowledge.
This article reviews literature that studies CL in other settings, such as learning with reduced supervision, fully unsupervised learning, and reinforcement learning.
arXiv Detail & Related papers (2022-08-30T14:44:41Z) - A Survey on Curriculum Learning [48.36129047271622]
Curriculum learning (CL) is a training strategy that trains a machine learning model from easier data to harder data.
As an easy-to-use plug-in, the CL strategy has demonstrated its power in improving the generalization capacity and convergence rate of various models.
arXiv Detail & Related papers (2020-10-25T17:15:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.