Divide and Conquer: Static-Dynamic Collaboration for Few-Shot Class-Incremental Learning
- URL: http://arxiv.org/abs/2601.08448v1
- Date: Tue, 13 Jan 2026 11:18:43 GMT
- Title: Divide and Conquer: Static-Dynamic Collaboration for Few-Shot Class-Incremental Learning
- Authors: Kexin Bao, Daichi Zhang, Yong Li, Dan Zeng, Shiming Ge
- Abstract summary: Few-shot class-incremental learning aims to continuously recognize novel classes under limited data. We propose a framework termed Static-Dynamic Collaboration to achieve a better trade-off between stability and plasticity. By employing both stages, our method achieves improved retention of old knowledge while continuously adapting to new classes.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Few-shot class-incremental learning (FSCIL) aims to continuously recognize novel classes under limited data, and suffers from the key stability-plasticity dilemma: balancing the retention of old knowledge with the acquisition of new knowledge. To address this issue, we divide the task into two stages and propose a framework termed Static-Dynamic Collaboration (SDC) to achieve a better trade-off between stability and plasticity. Specifically, our method divides the normal FSCIL pipeline into a Static Retaining Stage (SRS) and a Dynamic Learning Stage (DLS), which harness static old-class and dynamic incremental-class information, respectively. During SRS, we train an initial model with sufficient data in the base session and preserve its key part as static memory to retain fundamental old knowledge. During DLS, we introduce an extra dynamic projector trained jointly with the previous static memory. By employing both stages, our method achieves improved retention of old knowledge while continuously adapting to new classes. Extensive experiments on three public benchmarks and a real-world application dataset demonstrate that our method achieves state-of-the-art performance compared with competing methods.
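The two-stage idea in the abstract can be caricatured in a few lines. The following is a minimal numpy sketch, not the authors' implementation: the frozen "static memory" from SRS is stood in for by a fixed random projection, the DLS "dynamic projector" by a trainable matrix (identity-initialized here), and recognition by a nearest-prototype rule. All names, dimensions, and the random prototypes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Static Retaining Stage (SRS), hypothetical stand-in ---
# The backbone trained on the plentiful base session is frozen afterwards;
# a fixed random projection plays the role of that "static memory" here.
feat_dim, emb_dim = 32, 16
static_memory = rng.standard_normal((feat_dim, emb_dim))  # frozen after SRS

def static_embed(x):
    # Stability: this mapping never changes after the base session.
    return x @ static_memory

# --- Dynamic Learning Stage (DLS), hypothetical stand-in ---
# A small projector on top of the frozen features would be trained jointly
# with the static memory's outputs to adapt to novel few-shot classes
# (plasticity) without overwriting the frozen backbone.
dynamic_projector = np.eye(emb_dim)  # identity here; trained in DLS in practice

def embed(x):
    return static_embed(x) @ dynamic_projector

# Per-class mean features would serve as prototypes; random vectors here.
prototypes = {c: rng.standard_normal(feat_dim) for c in range(8)}

def classify(x):
    # Nearest-prototype decision over base (0-4) and novel (5-7) classes.
    q = embed(x)
    return min(prototypes, key=lambda c: np.linalg.norm(q - embed(prototypes[c])))

label = classify(rng.standard_normal(feat_dim))
```

The point of the split is visible in the code: only `dynamic_projector` would receive gradient updates in incremental sessions, so old-class behavior encoded in `static_memory` cannot drift.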
Related papers
- Modular Memory is the Key to Continual Learning Agents
We argue that combining the strengths of In-Weight Learning (IWL) and the newly emerged capabilities of In-Context Learning (ICL) through the design of modular memory is the missing piece for continual adaptation at scale. We outline a conceptual framework for modular memory-centric architectures that leverage ICL for rapid adaptation and knowledge accumulation, and IWL for stable updates to model capabilities.
arXiv Detail & Related papers (2026-03-02T11:40:05Z) - Multi-Level Knowledge Distillation and Dynamic Self-Supervised Learning for Continual Learning
Class-incremental learning with repetition (CIR) is a more realistic scenario than the traditional class-incremental setup. We propose two components that efficiently use unlabeled data to ensure both the stability and the plasticity of models trained in the CIR setup. Both proposed components significantly improve performance in the CIR setup, achieving 2nd place in the CVPR 5th CLVISION Challenge.
arXiv Detail & Related papers (2025-08-18T07:50:20Z) - Multi-level Collaborative Distillation Meets Global Workspace Model: A Unified Framework for OCIL
Online Class-Incremental Learning (OCIL) enables models to learn continuously from non-i.i.d. data streams. OCIL faces two key challenges: maintaining model stability under strict memory constraints and ensuring adaptability to new tasks. We propose a novel approach that enhances ensemble learning through a Global Workspace Model (GWM).
arXiv Detail & Related papers (2025-08-12T06:52:33Z) - SplitLoRA: Balancing Stability and Plasticity in Continual Learning Through Gradient Space Splitting
Continual learning requires a model to learn multiple tasks in sequence while maintaining both stability (preserving knowledge from previously learned tasks) and plasticity (effectively learning new tasks). Gradient projection has emerged as an effective and popular paradigm in CL, partitioning the gradient space of previously learned tasks into two subspaces. New tasks are learned within the minor subspace, thereby reducing interference with previously acquired knowledge. However, existing gradient projection methods struggle to achieve an optimal balance between plasticity and stability, as it is hard to partition the gradient space appropriately.
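The gradient-space splitting this summary describes can be sketched in numpy. This is a hedged, generic illustration of gradient projection for continual learning (in the spirit of SVD-based methods), not the SplitLoRA algorithm itself; the 90% energy threshold, dimensions, and all variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Gradients collected from previously learned tasks, one per column.
d = 20
old_grads = rng.standard_normal((d, 50))

# SVD identifies the "major" subspace: directions carrying most of the
# old-task gradient energy, where updates would cause forgetting.
U, S, _ = np.linalg.svd(old_grads, full_matrices=False)
energy = np.cumsum(S**2) / np.sum(S**2)
k = int(np.searchsorted(energy, 0.90)) + 1  # keep 90% of the energy
major = U[:, :k]                            # stability (major) subspace basis

def project_to_minor(g):
    """Strip the component of g lying in the old-task major subspace,
    leaving only the minor-subspace component, which is safe to follow."""
    return g - major @ (major.T @ g)

# A new-task gradient is projected before each parameter update.
g_new = rng.standard_normal(d)
g_safe = project_to_minor(g_new)
```

Because the columns of `major` are orthonormal, the projected gradient is orthogonal to the major subspace, so following it leaves old-task directions (numerically) untouched; the tension the abstract points at is choosing `k`, since a larger major subspace protects old tasks but shrinks the room left for new ones.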
arXiv Detail & Related papers (2025-05-28T13:57:56Z) - Reinforced Interactive Continual Learning via Real-time Noisy Human Feedback
This paper introduces an interactive continual learning paradigm in which AI models dynamically learn new skills from real-time human feedback. We propose RiCL, a Reinforced interactive Continual Learning framework leveraging Large Language Models (LLMs). Our RiCL approach substantially outperforms existing combinations of state-of-the-art online continual learning and noisy-label learning methods.
arXiv Detail & Related papers (2025-05-15T03:22:03Z) - Inclusive Training Separation and Implicit Knowledge Interaction for Balanced Online Class-Incremental Learning
Online class-incremental learning (OCIL) focuses on gradually learning new classes (plasticity) from a stream of data in a single pass. The primary challenge in OCIL lies in maintaining a good balance between the knowledge of old and new classes. We propose a novel replay-based method, Balanced Online Incremental Learning (BOIL), which achieves both high plasticity and stability.
arXiv Detail & Related papers (2025-04-29T09:13:00Z) - DESIRE: Dynamic Knowledge Consolidation for Rehearsal-Free Continual Learning
Continual learning aims to equip models with the ability to retain previously learned knowledge, as humans do. Existing methods usually overlook the information leakage caused by the experiment data having been used to train pre-trained models. In this paper, we propose a new LoRA-based rehearsal-free method named DESIRE.
arXiv Detail & Related papers (2024-11-28T13:54:01Z) - Continual Task Learning through Adaptive Policy Self-Composition
CompoFormer is a structure-based continual transformer model that adaptively composes previous policies via a meta-policy network.
Our experiments reveal that CompoFormer outperforms conventional continual learning (CL) methods, particularly in longer task sequences.
arXiv Detail & Related papers (2024-11-18T08:20:21Z) - Mamba-FSCIL: Dynamic Adaptation with Selective State Space Model for Few-Shot Class-Incremental Learning
Few-shot class-incremental learning (FSCIL) aims to incrementally learn novel classes from limited examples. Existing methods face a critical dilemma: static architectures rely on a fixed parameter space to learn from data that arrive sequentially, and are prone to overfitting to the current session. In this study, we explore the potential of Selective State Space Models (SSMs) for FSCIL.
arXiv Detail & Related papers (2024-07-08T17:09:39Z) - Branch-Tuning: Balancing Stability and Plasticity for Continual Self-Supervised Learning
Self-supervised learning (SSL) has emerged as an effective paradigm for deriving general representations from vast amounts of unlabeled data.
This poses a challenge in striking a balance between stability and plasticity when adapting to new information.
We propose Branch-tuning, an efficient and straightforward method that achieves a balance between stability and plasticity in continual SSL.
arXiv Detail & Related papers (2024-03-27T05:38:48Z) - Static-Dynamic Co-Teaching for Class-Incremental 3D Object Detection
Deep learning approaches have shown remarkable performance in the 3D object detection task.
They suffer from a catastrophic performance drop when incrementally learning new classes without revisiting the old data.
This "catastrophic forgetting" phenomenon impedes the deployment of 3D object detection approaches in real-world scenarios.
We present the first solution, SDCoT, a novel static-dynamic co-teaching method.
arXiv Detail & Related papers (2021-12-14T09:03:41Z) - Learning to Continuously Optimize Wireless Resource in a Dynamic Environment: A Bilevel Optimization Perspective
This work develops a new approach that enables data-driven methods to continuously learn and optimize resource allocation strategies in a dynamic environment.
We propose to build the notion of continual learning into wireless system design, so that the learning model can incrementally adapt to the new episodes.
Our design is based on a novel bilevel optimization formulation which ensures certain "fairness" across different data samples.
arXiv Detail & Related papers (2021-05-03T07:23:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.