Feature Calibration enhanced Parameter Synthesis for CLIP-based Class-incremental Learning
- URL: http://arxiv.org/abs/2503.18672v4
- Date: Thu, 17 Apr 2025 12:26:16 GMT
- Title: Feature Calibration enhanced Parameter Synthesis for CLIP-based Class-incremental Learning
- Authors: Juncen Guo, Yang Liu, Xiaoguang Zhu, Lianlong Sun, Liangyu Teng, Jingyi Wu, Di Li, Wei Zhou, Liang Song
- Abstract summary: Class-Incremental Learning (CIL) enables models to continuously learn new class knowledge while retaining previous classes. Traditional CIL methods rely primarily on visual features, which limits their effectiveness in complex, multimodal scenarios. We propose a Feature Calibration Enhanced Parameter Synthesis (FCPS) framework that mitigates catastrophic forgetting while preserving the model's intrinsic generalization capability.
- Score: 10.253058594622017
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Class-Incremental Learning (CIL) enables models to continuously learn new class knowledge while retaining previous classes, facilitating adaptation and evolution in dynamic, real-world environments. Traditional CIL methods primarily rely on visual features, which limits their effectiveness in complex, multimodal scenarios. In contrast, Vision-Language Models (VLMs) show promising potential for enhancing CIL by leveraging pre-trained knowledge and integrating multi-modal semantic cues such as text and vision. However, existing approaches struggle to mitigate catastrophic forgetting while preserving the generalization strengths of VLMs across diverse modalities. To address these challenges, we propose a Feature Calibration Enhanced Parameter Synthesis (FCPS) framework. Specifically, FCPS introduces a dynamic parameter adjustment mechanism that iteratively calibrates the contribution of original visual features to the final class decision, thus preserving the model's intrinsic generalization capability across modalities. Simultaneously, parameter integration enables effective knowledge transfer, maintaining a balance between acquiring new class representations and preserving old knowledge. Experimental results on popular benchmarks (e.g., CIFAR100 and ImageNet100) validate the superiority of the proposed method.
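To make the abstract's two mechanisms concrete, below is a minimal, hypothetical PyTorch sketch of (i) feature calibration, where a learnable gate controls how much of the original frozen CLIP visual feature is retained alongside an adapted feature, and (ii) parameter synthesis, where the previous task's parameters are interpolated with the newly trained ones. All class, function, and parameter names (CalibratedVisualHead, synthesize_parameters, beta, and so on) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: not the FCPS reference implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CalibratedVisualHead(nn.Module):
    """Blends frozen CLIP visual features with an adapted branch (assumed design)."""
    def __init__(self, dim: int):
        super().__init__()
        self.adapter = nn.Linear(dim, dim)        # task-adaptive branch
        self.gate = nn.Parameter(torch.zeros(1))  # learnable calibration weight

    def forward(self, clip_visual_feats: torch.Tensor) -> torch.Tensor:
        # Feature calibration: keep a tunable share of the original CLIP features
        # so pre-trained generalization is not overwritten by new-task training.
        alpha = torch.sigmoid(self.gate)
        adapted = self.adapter(clip_visual_feats)
        return alpha * clip_visual_feats + (1.0 - alpha) * adapted

@torch.no_grad()
def synthesize_parameters(old_head: nn.Module, new_head: nn.Module, beta: float = 0.5) -> None:
    # Parameter synthesis: after training on task t, merge the task t-1 parameters
    # into the new ones, balancing retention of old classes against plasticity.
    for p_old, p_new in zip(old_head.parameters(), new_head.parameters()):
        p_new.copy_(beta * p_old + (1.0 - beta) * p_new)

def class_logits(visual_feats: torch.Tensor, text_feats: torch.Tensor,
                 head: CalibratedVisualHead, scale: float = 100.0) -> torch.Tensor:
    # CLIP-style classification: cosine similarity between calibrated visual
    # features and the text embeddings of all class prompts seen so far.
    v = F.normalize(head(visual_feats), dim=-1)
    t = F.normalize(text_feats, dim=-1)
    return scale * v @ t.t()
```

Here `beta` plays the role of a stability-plasticity trade-off in the merge; the actual calibration and integration rules used by FCPS may differ from this sketch.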
Related papers
- Self-Controlled Dynamic Expansion Model for Continual Learning [10.447232167638816]
This paper introduces an innovative Self-Controlled Dynamic Expansion Model (SCDEM)
SCDEM orchestrates multiple trainable pre-trained ViT backbones to furnish diverse and semantically enriched representations.
An extensive series of experiments have been conducted to evaluate the proposed methodology's efficacy.
arXiv Detail & Related papers (2025-04-14T15:22:51Z)
- BECAME: BayEsian Continual Learning with Adaptive Model MErging [21.642774366793997]
We introduce a two-stage framework named BECAME, which synergizes the expertise of gradient projection and adaptive merging.
Our approach outperforms state-of-the-art CL methods and existing merging strategies.
arXiv Detail & Related papers (2025-04-03T15:07:28Z)
- Semi-supervised Semantic Segmentation with Multi-Constraint Consistency Learning [81.02648336552421]
We propose a Multi-Constraint Consistency Learning approach to facilitate the staged enhancement of the encoder and decoder.
Self-adaptive feature masking and noise injection are designed in an instance-specific manner to perturb the features for robust learning of the decoder.
Experimental results on Pascal VOC2012 and Cityscapes datasets demonstrate that our proposed MCCL achieves new state-of-the-art performance.
arXiv Detail & Related papers (2025-03-23T03:21:33Z)
- Will Pre-Training Ever End? A First Step Toward Next-Generation Foundation MLLMs via Self-Improving Systematic Cognition [86.21199607040147]
Self-Improving Cognition (SIcog) is a self-learning framework for constructing next-generation foundation multimodal large language models (MLLMs).
We introduce Chain-of-Description, a step-by-step visual understanding method, and integrate structured chain-of-thought (CoT) reasoning to support in-depth multimodal reasoning.
Extensive experiments demonstrate that SIcog produces next-generation foundation MLLMs with substantially improved multimodal cognition.
arXiv Detail & Related papers (2025-03-16T00:25:13Z)
- Sculpting [CLS] Features for Pre-Trained Model-Based Class-Incremental Learning [3.73232466691291]
Class-incremental learning requires models to continually acquire knowledge of new classes without forgetting old ones.
Although pre-trained models have demonstrated strong performance in class-incremental learning, they remain susceptible to catastrophic forgetting when learning new concepts.
We introduce a new parameter-efficient fine-tuning module 'Learn and Calibrate', or LuCA, designed to acquire knowledge through an adapter-calibrator couple.
For each learning session, we deploy a sparse LuCA module on top of the last token, which we refer to as 'Token-level Sparse and Adaptation', or TO
arXiv Detail & Related papers (2025-02-20T17:37:08Z)
- Retaining and Enhancing Pre-trained Knowledge in Vision-Language Models with Prompt Ensembling [5.6987175375687995]
We introduce a novel prompt ensemble learning approach called Group-wise Prompt Ensemble (GPE).
Our method aims to enhance CLIP's zero-shot capabilities by incorporating new domain knowledge while improving its robustness against data distribution shifts.
Our approach hinges on three main strategies: prompt grouping with masked attention to optimize CLIP's adaptability while safeguarding its zero-shot capabilities; the incorporation of auxiliary prompts for the seamless integration of new domain insights without disrupting the original model's representation; and an ensemble learning strategy that effectively merges original and new knowledge.
arXiv Detail & Related papers (2024-12-10T00:40:31Z)
- SLCA++: Unleash the Power of Sequential Fine-tuning for Continual Learning with Pre-training [68.7896349660824]
We present an in-depth analysis of the progressive overfitting problem through the lens of sequential fine-tuning (Seq FT).
Considering that overly fast representation learning and a biased classification layer constitute this particular problem, we introduce the advanced Slow Learner with Classifier Alignment (SLCA++) framework.
Our approach involves a Slow Learner to selectively reduce the learning rate of backbone parameters, and a Classifier Alignment to align the disjoint classification layers in a post-hoc fashion.
arXiv Detail & Related papers (2024-08-15T17:50:07Z)
- CLIP with Generative Latent Replay: a Strong Baseline for Incremental Learning [17.614980614656407]
We propose Continual Generative training for Incremental prompt-Learning.
We exploit Variational Autoencoders to learn class-conditioned distributions.
We show that such a generative replay approach can adapt to new tasks while improving zero-shot capabilities.
arXiv Detail & Related papers (2024-07-22T16:51:28Z)
- Mamba-FSCIL: Dynamic Adaptation with Selective State Space Model for Few-Shot Class-Incremental Learning [113.89327264634984]
Few-shot class-incremental learning (FSCIL) confronts the challenge of integrating new classes into a model with minimal training samples.
Traditional methods widely adopt static adaptation relying on a fixed parameter space to learn from data that arrive sequentially.
We propose a dual selective SSM projector that dynamically adjusts the projection parameters based on the intermediate features for dynamic adaptation.
arXiv Detail & Related papers (2024-07-08T17:09:39Z)
- Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters [65.15700861265432]
We present a parameter-efficient continual learning framework to alleviate long-term forgetting in incremental learning with vision-language models.
Our approach involves the dynamic expansion of a pre-trained CLIP model, through the integration of Mixture-of-Experts (MoE) adapters.
To preserve the zero-shot recognition capability of vision-language models, we introduce a Distribution Discriminative Auto-Selector.
arXiv Detail & Related papers (2024-03-18T08:00:23Z)
- Learning Prompt with Distribution-Based Feature Replay for Few-Shot Class-Incremental Learning [56.29097276129473]
We propose a simple yet effective framework, named Learning Prompt with Distribution-based Feature Replay (LP-DiF).
To prevent the learnable prompt from forgetting old knowledge in the new session, we propose a pseudo-feature replay approach.
When progressing to a new session, pseudo-features are sampled from old-class distributions combined with training images of the current session to optimize the prompt.
arXiv Detail & Related papers (2024-01-03T07:59:17Z)
- SEER-ZSL: Semantic Encoder-Enhanced Representations for Generalized Zero-Shot Learning [0.6792605600335813]
Zero-Shot Learning (ZSL) presents the challenge of identifying categories not seen during training.
We introduce Semantic Encoder-Enhanced Representations for Zero-Shot Learning (SEER-ZSL).
First, we aim to distill meaningful semantic information using a probabilistic encoder, enhancing the semantic consistency and robustness.
Second, we distill the visual space by exploiting the learned data distribution through an adversarially trained generator.
Third, we align the distilled information, enabling a mapping of unseen categories onto the true data manifold.
arXiv Detail & Related papers (2023-12-20T15:18:51Z)
- Class Incremental Learning with Pre-trained Vision-Language Models [59.15538370859431]
We propose an approach to exploiting pre-trained vision-language models (e.g. CLIP) that enables further adaptation.
Experiments on several conventional benchmarks consistently show a significant margin of improvement over the current state-of-the-art.
arXiv Detail & Related papers (2023-10-31T10:45:03Z)
- Synthetic Sample Selection for Generalized Zero-Shot Learning [4.264192013842096]
Generalized Zero-Shot Learning (GZSL) has emerged as a pivotal research domain in computer vision.
This paper proposes a novel approach for synthetic feature selection using reinforcement learning.
arXiv Detail & Related papers (2023-04-06T03:22:43Z)
- Self-Supervised Class Incremental Learning [51.62542103481908]
Existing Class Incremental Learning (CIL) methods are based on a supervised classification framework sensitive to data labels.
When updated on new class data, they suffer from catastrophic forgetting: the model cannot clearly discern old class data from the new.
In this paper, we explore the performance of Self-Supervised representation learning in Class Incremental Learning (SSCIL) for the first time.
arXiv Detail & Related papers (2021-11-18T06:58:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.