Multi-level Collaborative Distillation Meets Global Workspace Model: A Unified Framework for OCIL
- URL: http://arxiv.org/abs/2508.08677v1
- Date: Tue, 12 Aug 2025 06:52:33 GMT
- Title: Multi-level Collaborative Distillation Meets Global Workspace Model: A Unified Framework for OCIL
- Authors: Shibin Su, Guoqiang Liang, De Cheng, Shizhou Zhang, Lingyan Ran, Yanning Zhang
- Abstract summary: Online Class-Incremental Learning (OCIL) enables models to learn continuously from non-i.i.d. data streams. OCIL faces two key challenges: maintaining model stability under strict memory constraints and ensuring adaptability to new tasks. We propose a novel approach that enhances ensemble learning through a Global Workspace Model (GWM).
- Score: 38.72433556055473
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online Class-Incremental Learning (OCIL) enables models to learn continuously from non-i.i.d. data streams in which each sample can be seen only once, making it more suitable for real-world scenarios than offline learning. However, OCIL faces two key challenges: maintaining model stability under strict memory constraints and ensuring adaptability to new tasks. Under stricter memory constraints, current replay-based methods become less effective, and while ensemble methods improve adaptability (plasticity), they often struggle with stability. To overcome these challenges, we propose a novel approach that enhances ensemble learning through a Global Workspace Model (GWM), a shared, implicit memory that guides the learning of multiple student models. The GWM is formed by fusing the parameters of all students within each training batch, capturing the historical learning trajectory and serving as a dynamic anchor for knowledge consolidation. This fused model is then redistributed periodically to the students to stabilize learning and promote cross-task consistency. In addition, we introduce a multi-level collaborative distillation mechanism that enforces peer-to-peer consistency among students and preserves historical knowledge by aligning each student with the GWM. As a result, student models remain adaptable to new tasks while retaining previously learned knowledge, striking a better balance between stability and plasticity. Extensive experiments on three standard OCIL benchmarks show that our method delivers significant performance improvements for several OCIL models across various memory budgets.
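As a rough illustration of the mechanism described in the abstract, the sketch below fuses student parameters into a shared GWM after each batch, periodically redistributes the fused weights back to the students, and computes a two-level distillation loss (peer-to-peer consistency plus alignment with the GWM). It is a minimal sketch under assumed design choices: the moving-average fusion rule, the KL-based distillation losses, the temperature, and all function names (`fuse_into_gwm`, `redistribute`, `collaborative_distillation_loss`) are illustrative and not taken from the paper.

```python
# Minimal sketch (not the authors' code) of the two mechanisms in the abstract,
# assuming PyTorch student models with identical architectures.
import copy

import torch
import torch.nn.functional as F


def fuse_into_gwm(gwm, students, momentum=0.9):
    """Fuse student parameters into the shared GWM (assumed: moving average)."""
    with torch.no_grad():
        for name, p_gwm in gwm.named_parameters():
            stacked = torch.stack(
                [dict(s.named_parameters())[name].data for s in students]
            )
            p_gwm.mul_(momentum).add_(stacked.mean(dim=0), alpha=1 - momentum)


def redistribute(gwm, students):
    """Periodically reset every student to the fused GWM parameters."""
    for s in students:
        s.load_state_dict(copy.deepcopy(gwm.state_dict()))


def collaborative_distillation_loss(student_logits, gwm_logits, T=2.0):
    """Peer-to-peer consistency among students plus alignment with the GWM."""
    n = len(student_logits)
    anchor, peer = 0.0, 0.0
    for i, li in enumerate(student_logits):
        # level 1 (assumed): align each student with the detached GWM output
        anchor += F.kl_div(
            F.log_softmax(li / T, dim=1),
            F.softmax(gwm_logits.detach() / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        # level 2 (assumed): pairwise consistency between students
        for j, lj in enumerate(student_logits):
            if i != j:
                peer += F.kl_div(
                    F.log_softmax(li / T, dim=1),
                    F.softmax(lj.detach() / T, dim=1),
                    reduction="batchmean",
                ) * (T * T)
    return anchor / n + peer / (n * (n - 1))


# Assumed usage inside an OCIL training loop:
#   loss = task_loss + collaborative_distillation_loss(logits_per_student, gwm(x))
#   ...backward/step on each student...
#   fuse_into_gwm(gwm, students)            # after every batch
#   if step % redistribute_every == 0:
#       redistribute(gwm, students)         # periodic redistribution
```

Fusing within each batch while redistributing only periodically is what lets the students stay plastic between resets while the GWM accumulates the historical learning trajectory; the exact fusion schedule and loss weighting are not given in the abstract, so the constants above are placeholders.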
Related papers
- Bagging-Based Model Merging for Robust General Text Embeddings [73.51674133699196]
General-purpose text embedding models underpin a wide range of NLP and information retrieval applications. We present a systematic study of multi-task training for text embeddings from two perspectives: data scheduling and model merging. We propose Bagging-based rObust mOdel Merging (BOOM), which trains multiple embedding models on sampled subsets and merges them into a single model; a minimal parameter-averaging sketch of this idea appears after the list.
arXiv Detail & Related papers (2026-02-05T15:45:08Z) - Divide and Conquer: Static-Dynamic Collaboration for Few-Shot Class-Incremental Learning [29.237877675710877]
Few-shot class-incremental learning aims to continuously recognize novel classes under limited data. We propose a framework termed Static-Dynamic Collaboration to achieve a better trade-off between stability and plasticity. By employing both stages, our method achieves improved retention of old knowledge while continuously adapting to new classes.
arXiv Detail & Related papers (2026-01-13T11:18:43Z) - Continual Learning in Vision-Language Models via Aligned Model Merging [84.47520899851557]
We present a new perspective based on model merging to maintain stability while still retaining plasticity. To maximize the effectiveness of the merging process, we propose a simple mechanism that promotes learning weights aligned with previous ones.
arXiv Detail & Related papers (2025-05-30T20:52:21Z) - Pareto Continual Learning: Preference-Conditioned Learning and Adaption for Dynamic Stability-Plasticity Trade-off [25.33978152436581]
We propose a novel framework that reformulates the stability-plasticity trade-off in continual learning. ParetoCL can be seen as an objective augmentation approach that learns from different combinations of the stability and plasticity objectives.
arXiv Detail & Related papers (2025-03-30T10:38:36Z) - CalFuse: Multi-Modal Continual Learning via Feature Calibration and Parameter Fusion [17.68751409041168]
Class-Continual Learning (CCL) addresses this challenge by incrementally incorporating new class knowledge without revisiting historical data. Recent advances in Vision-Language Models (VLMs) such as CLIP demonstrate significant potential for CCL by leveraging pre-trained multi-modal knowledge. We propose CalFuse, a framework that synergizes feature calibration and parameter fusion to enable effective multi-modal knowledge integration.
arXiv Detail & Related papers (2025-03-24T13:44:12Z) - ReconBoost: Boosting Can Achieve Modality Reconcilement [89.4377895465204]
We study the modality-alternating learning paradigm to achieve reconcilement.
We propose a new method called ReconBoost to update a fixed modality each time.
We show that the proposed method resembles Friedman's Gradient-Boosting (GB) algorithm, where the updated learner can correct errors made by others.
arXiv Detail & Related papers (2024-05-15T13:22:39Z) - Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters [65.15700861265432]
We present a parameter-efficient continual learning framework to alleviate long-term forgetting in incremental learning with vision-language models.
Our approach involves the dynamic expansion of a pre-trained CLIP model, through the integration of Mixture-of-Experts (MoE) adapters.
To preserve the zero-shot recognition capability of vision-language models, we introduce a Distribution Discriminative Auto-Selector.
arXiv Detail & Related papers (2024-03-18T08:00:23Z) - Towards Plastic and Stable Exemplar-Free Incremental Learning: A Dual-Learner Framework with Cumulative Parameter Averaging [12.168402195820649]
We propose a Dual-Learner framework with Cumulative Parameter Averaging (DLCPA).
We show that DLCPA outperforms several state-of-the-art exemplar-free baselines in both Task-IL and Class-IL settings.
arXiv Detail & Related papers (2023-10-28T08:48:44Z) - Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
arXiv Detail & Related papers (2023-09-15T17:10:51Z) - Learning an evolved mixture model for task-free continual learning [11.540150938141034]
We address the Task-Free Continual Learning (TFCL) in which a model is trained on non-stationary data streams with no explicit task information.
We introduce two simple dropout mechanisms to selectively remove stored examples in order to avoid memory overload.
arXiv Detail & Related papers (2022-07-11T16:01:27Z)
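For the bagging-based merging idea summarized in the first related paper (BOOM), a minimal sketch follows: several copies of a base model are trained on sampled subsets of the data and their parameters are averaged into one model. The sampling scheme, the `train_fn` hook, and the uniform averaging are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch (assumed details) of bagging-based model merging:
# train copies on sampled subsets, then average their parameters.
import copy
import random

import torch
from torch.utils.data import Subset


def bagging_merge(base_model, dataset, train_fn, n_members=4, subset_frac=0.8):
    members = []
    for _ in range(n_members):
        member = copy.deepcopy(base_model)
        idx = random.sample(range(len(dataset)), int(subset_frac * len(dataset)))
        train_fn(member, Subset(dataset, idx))  # user-supplied training routine
        members.append(member)

    # merge: uniform average of parameters across members
    merged = copy.deepcopy(base_model)
    with torch.no_grad():
        for name, p in merged.named_parameters():
            p.copy_(torch.stack(
                [dict(m.named_parameters())[name].data for m in members]
            ).mean(dim=0))
    return merged
```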