Continual Learning Should Move Beyond Incremental Classification
- URL: http://arxiv.org/abs/2502.11927v1
- Date: Mon, 17 Feb 2025 15:40:13 GMT
- Title: Continual Learning Should Move Beyond Incremental Classification
- Authors: Rupert Mitchell, Antonio Alliegro, Raffaello Camoriano, Dustin Carrión-Ojeda, Antonio Carta, Georgia Chalvatzaki, Nikhil Churamani, Carlo D'Eramo, Samin Hamidi, Robin Hesse, Fabian Hinder, Roshni Ramanna Kamath, Vincenzo Lomonaco, Subarnaduti Paul, Francesca Pistilli, Tinne Tuytelaars, Gido M van de Ven, Kristian Kersting, Simone Schaub-Meyer, Martin Mundt
- Abstract summary: Continual learning (CL) is the sub-field of machine learning concerned with accumulating knowledge in dynamic environments.
Here, we argue that maintaining such a focus on incremental classification limits both the theoretical development and the practical applicability of CL methods.
We identify three fundamental challenges: (C1) the nature of continuity in learning problems, (C2) the choice of appropriate spaces and metrics for measuring similarity, and (C3) the role of learning objectives beyond classification.
- Abstract: Continual learning (CL) is the sub-field of machine learning concerned with accumulating knowledge in dynamic environments. So far, CL research has mainly focused on incremental classification tasks, where models learn to classify new categories while retaining knowledge of previously learned ones. Here, we argue that maintaining such a focus limits both theoretical development and practical applicability of CL methods. Through a detailed analysis of concrete examples - including multi-target classification, robotics with constrained output spaces, learning in continuous task domains, and higher-level concept memorization - we demonstrate how current CL approaches often fail when applied beyond standard classification. We identify three fundamental challenges: (C1) the nature of continuity in learning problems, (C2) the choice of appropriate spaces and metrics for measuring similarity, and (C3) the role of learning objectives beyond classification. For each challenge, we provide specific recommendations to help move the field forward, including formalizing temporal dynamics through distribution processes, developing principled approaches for continuous task spaces, and incorporating density estimation and generative objectives. In so doing, this position paper aims to broaden the scope of CL research while strengthening its theoretical foundations, making it more applicable to real-world problems.
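The abstract's first recommendation, formalizing temporal dynamics through distribution processes, can be made concrete with a toy data stream whose ground truth drifts smoothly rather than switching at discrete task boundaries. The sketch below is illustrative only (the function names and the sinusoidal drift are our assumptions, not the paper's formalism):

```python
import math
import random

def drifting_stream(n_steps, batch_size=8, seed=0):
    """Toy regression stream whose target drifts continuously.

    Unlike class-incremental benchmarks with hard task boundaries, the
    ground-truth slope here changes smoothly with step t, illustrating
    the "distribution process" view of continuity (challenge C1).
    """
    rng = random.Random(seed)
    for t in range(n_steps):
        slope = math.sin(2 * math.pi * t / n_steps)  # smooth drift over time
        xs = [rng.uniform(-1, 1) for _ in range(batch_size)]
        ys = [slope * x + rng.gauss(0, 0.05) for x in xs]
        yield t, xs, ys

# A learner consuming this stream never observes a discrete task switch,
# so methods keyed to task boundaries have nothing to trigger on.
batches = list(drifting_stream(n_steps=4))
```

A class-incremental method that waits for a task-boundary signal would never fire on such a stream, which is one way to see why the paper argues for boundary-free formalizations.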
Related papers
- Continual Task Learning through Adaptive Policy Self-Composition [54.95680427960524]
CompoFormer is a structure-based continual transformer model that adaptively composes previous policies via a meta-policy network.
Our experiments reveal that CompoFormer outperforms conventional continual learning (CL) methods, particularly in longer task sequences.
arXiv Detail & Related papers (2024-11-18T08:20:21Z) - Solving Continual Offline RL through Selective Weights Activation on Aligned Spaces [52.649077293256795]
Continual offline reinforcement learning (CORL) has shown impressive ability in diffusion-based lifelong learning systems.
We propose Vector-Quantized Continual diffuser, named VQ-CD, to break the barrier of different spaces between various tasks.
arXiv Detail & Related papers (2024-10-21T07:13:45Z) - CLoG: Benchmarking Continual Learning of Image Generation Models [29.337710309698515]
This paper advocates for shifting the research focus from classification-based CL to CLoG.
We adapt three types of existing CL methodologies, replay-based, regularization-based, and parameter-isolation-based methods to generative tasks.
Our benchmarks and results yield intriguing insights that can be valuable for developing future CLoG methods.
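Of the three CL families the CLoG paper adapts, replay is the simplest to sketch. Below is a generic reservoir-sampling replay buffer, a common ingredient of replay-based methods; this is our own minimal illustration, not code from the CLoG benchmark:

```python
import random

class ReservoirReplayBuffer:
    """Minimal reservoir-sampling replay buffer (generic sketch)."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(item)
        else:
            # Reservoir sampling: every item seen so far ends up stored
            # with equal probability capacity / seen.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = item

    def sample(self, k):
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))

buf = ReservoirReplayBuffer(capacity=10)
for example in range(100):   # stand-in for a stream of training examples
    buf.add(example)
replay = buf.sample(4)       # mixed into each minibatch of the new task
```

Regularization-based and parameter-isolation methods instead constrain or partition the weights themselves rather than storing past data.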
arXiv Detail & Related papers (2024-06-07T02:12:29Z) - Recent Advances of Foundation Language Models-based Continual Learning: A Survey [31.171203978742447]
Foundation language models (LMs) have achieved significant milestones in the domains of natural language processing (NLP) and computer vision (CV).
However, they cannot emulate human-like continuous learning due to catastrophic forgetting.
Various continual learning (CL)-based methodologies have been developed to refine LMs, enabling them to adapt to new tasks without forgetting previous knowledge.
arXiv Detail & Related papers (2024-05-28T23:32:46Z) - Scalable Language Model with Generalized Continual Learning [58.700439919096155]
Joint Adaptive Re-Parameterization (JARe) is integrated with Dynamic Task-related Knowledge Retrieval (DTKR) to enable adaptive adjustment of language models based on specific downstream tasks.
Our method demonstrates state-of-the-art performance on diverse backbones and benchmarks, achieving effective continual learning in both full-set and few-shot scenarios with minimal forgetting.
arXiv Detail & Related papers (2024-04-11T04:22:15Z) - A Unified and General Framework for Continual Learning [58.72671755989431]
Continual Learning (CL) focuses on learning from dynamic and changing data distributions while retaining previously acquired knowledge.
Various methods have been developed to address the challenge of catastrophic forgetting, including regularization-based, Bayesian-based, and memory-replay-based techniques.
This research aims to bridge this gap by introducing a comprehensive and overarching framework that encompasses and reconciles these existing methodologies.
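The regularization-based family mentioned above is typified by a quadratic penalty in the style of Elastic Weight Consolidation: parameters that were important for earlier tasks (as measured by a Fisher-information estimate) are anchored to their old values. The helper below is a generic sketch of that idea, not the unified framework proposed in the paper:

```python
def ewc_penalty(params, old_params, fisher, lam=1.0):
    """EWC-style quadratic regularizer (generic sketch).

    Penalizes moving each parameter away from its post-previous-task
    value, weighted by that parameter's estimated importance (Fisher).
    """
    return 0.5 * lam * sum(
        f * (p - p_old) ** 2
        for p, p_old, f in zip(params, old_params, fisher)
    )

# Training on a new task then minimizes:
#   total_loss = new_task_loss + ewc_penalty(params, old_params, fisher)
# Only the second parameter moved (1.0 -> 1.0 is unchanged; 0.0 -> 1.0
# with Fisher weight 4.0 contributes 0.5 * 4.0 * 1.0 = 2.0).
penalty = ewc_penalty(params=[1.0, 2.0], old_params=[0.0, 2.0], fisher=[4.0, 1.0])
```

Bayesian-based methods can be read as a probabilistic generalization of this penalty (a Gaussian posterior over weights), which is one reason a unifying framework across these families is plausible.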
arXiv Detail & Related papers (2024-03-20T02:21:44Z) - Active Continual Learning: On Balancing Knowledge Retention and
Learnability [43.6658577908349]
Acquiring new knowledge without forgetting what has been learned in a sequence of tasks is the central focus of continual learning (CL).
This paper considers the under-explored problem of active continual learning (ACL) for a sequence of active learning (AL) tasks.
We investigate the effectiveness and interplay between several AL and CL algorithms in the domain, class and task-incremental scenarios.
arXiv Detail & Related papers (2023-05-06T04:11:03Z) - From MNIST to ImageNet and Back: Benchmarking Continual Curriculum
Learning [9.104068727716294]
Continual learning (CL) is one of the most promising trends in machine learning research.
We introduce two novel CL benchmarks that involve multiple heterogeneous tasks from six image datasets.
We additionally structure our benchmarks so that tasks are presented in increasing and decreasing order of complexity.
arXiv Detail & Related papers (2023-03-16T18:11:19Z) - Beyond Supervised Continual Learning: a Review [69.9674326582747]
Continual Learning (CL) is a flavor of machine learning where the usual assumption of stationary data distribution is relaxed or omitted.
Changes in the data distribution can cause the so-called catastrophic forgetting (CF) effect: an abrupt loss of previous knowledge.
This article reviews literature that studies CL in other settings, such as learning with reduced supervision, fully unsupervised learning, and reinforcement learning.
arXiv Detail & Related papers (2022-08-30T14:44:41Z) - International Workshop on Continual Semi-Supervised Learning:
Introduction, Benchmarks and Baselines [20.852277473776617]
The aim of this paper is to formalize a new continual semi-supervised learning (CSSL) paradigm.
The paper introduces two new benchmarks specifically designed to assess CSSL on two important computer vision tasks.
We describe the Continual Activity Recognition (CAR) and Continual Crowd Counting (CCC) challenges built upon those benchmarks, the baseline models proposed for the challenges, and describe a simple CSSL baseline.
arXiv Detail & Related papers (2021-10-27T17:34:40Z)