Adapter-Enhanced Semantic Prompting for Continual Learning
- URL: http://arxiv.org/abs/2412.11074v1
- Date: Sun, 15 Dec 2024 06:14:55 GMT
- Title: Adapter-Enhanced Semantic Prompting for Continual Learning
- Authors: Baocai Yin, Ji Zhao, Huajie Jiang, Ningning Hou, Yongli Hu, Amin Beheshti, Ming-Hsuan Yang, Yuankai Qi
- Abstract summary: Continual learning (CL) enables models to adapt to evolving data streams.
Traditional methods usually retain past data for replay or add additional branches to the model to learn new knowledge.
We propose a novel lightweight CL framework, which integrates prompt tuning and adapter techniques.
- Score: 91.63494614012362
- Abstract: Continual learning (CL) enables models to adapt to evolving data streams. A major challenge of CL is catastrophic forgetting, where new knowledge overwrites previously acquired knowledge. Traditional methods usually retain past data for replay or add additional branches to the model to learn new knowledge, both of which incur high memory costs. In this paper, we propose a novel lightweight CL framework, Adapter-Enhanced Semantic Prompting (AESP), which integrates prompt tuning and adapter techniques. Specifically, we design semantic-guided prompts to enhance the generalization ability of visual features and utilize adapters to efficiently fuse the semantic information, aiming to learn more adaptive features for the continual learning task. Furthermore, to choose the right task prompt for feature adaptation, we develop a novel matching mechanism for prompt selection. Extensive experiments on three CL datasets demonstrate that our approach achieves favorable performance across multiple metrics, showing its potential for advancing CL.
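To make the abstract's pipeline concrete, below is a minimal PyTorch sketch of the two ingredients it describes: key-based task-prompt selection and a bottleneck adapter that fuses semantic information into visual features. All module names, shapes, and the cosine-similarity matching rule are illustrative assumptions, not the authors' released code.

```python
# Illustrative sketch of AESP's two ingredients (assumed shapes and names):
# a prompt pool matched by learnable keys, and a residual adapter that fuses
# a semantic embedding into the visual feature.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticAdapter(nn.Module):
    """Bottleneck adapter mixing a semantic embedding into visual features."""
    def __init__(self, dim=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim * 2, bottleneck)   # visual + semantic input
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, visual, semantic):
        fused = torch.cat([visual, semantic], dim=-1)
        return visual + self.up(F.relu(self.down(fused)))  # residual fusion

class PromptPool(nn.Module):
    """One prompt per task, chosen by cosine similarity between a query
    feature and learnable keys (a common prompt-matching scheme)."""
    def __init__(self, num_tasks=10, prompt_len=8, dim=768):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_tasks, dim))
        self.prompts = nn.Parameter(torch.randn(num_tasks, prompt_len, dim))

    def forward(self, query):                        # query: (B, dim)
        sim = F.cosine_similarity(query.unsqueeze(1), self.keys, dim=-1)
        idx = sim.argmax(dim=1)                      # best-matching task prompt
        return self.prompts[idx]                     # (B, prompt_len, dim)

pool, adapter = PromptPool(), SemanticAdapter()
q = torch.randn(4, 768)                              # e.g. a frozen ViT [CLS] feature
prompts = pool(q)                                    # prompts prepended to tokens
out = adapter(q, torch.randn(4, 768))                # fuse a semantic embedding
```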
Related papers
- LW2G: Learning Whether to Grow for Prompt-based Continual Learning [15.766350352592331]
Recent Prompt-based Continual Learning (PCL) has achieved remarkable performance with Pre-Trained Models (PTMs).
We propose a plug-in module that learns whether to grow (LW2G) a new prompt set for each task, based on the disparities between tasks.
Inspired by Gradient Projection Continual Learning, LW2G develops a metric called Hinder Forward Capability (HFC) to measure the hindrance imposed on learning new tasks.
arXiv Detail & Related papers (2024-09-27T15:55:13Z)
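A hedged sketch of what a gradient-projection-style hindrance measure could look like, in the spirit of HFC: the larger the share of the new-task gradient that falls inside the old tasks' feature subspace (and is therefore projected away), the more learning is hindered. The function name and the exact score below are illustrative; the paper defines the precise metric.

```python
# Toy gradient-projection hindrance score (illustrative, not LW2G's formula).
import torch

def hindrance_score(grad: torch.Tensor, old_basis: torch.Tensor) -> float:
    """grad: flattened new-task gradient, shape (d,).
    old_basis: orthonormal basis of the old tasks' subspace, shape (d, k).
    Gradient-projection CL keeps only the update orthogonal to old_basis,
    so the component inside the old subspace is 'blocked' capacity."""
    inside = old_basis @ (old_basis.T @ grad)  # component inside old subspace
    kept = grad - inside                       # update that survives projection
    return 1.0 - kept.norm().item() / grad.norm().item()

d, k = 128, 8
basis, _ = torch.linalg.qr(torch.randn(d, k))  # toy orthonormal basis
print(hindrance_score(torch.randn(d), basis))  # ~0 = free to learn, ~1 = blocked
```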
- Auto-selected Knowledge Adapters for Lifelong Person Re-identification [54.42307214981537]
Lifelong Person Re-Identification requires systems to continually learn from non-overlapping datasets across different times and locations.
Existing approaches, either rehearsal-free or rehearsal-based, still suffer from the problem of catastrophic forgetting.
We introduce a novel framework, AdalReID, which adopts knowledge adapters and a parameter-free auto-selection mechanism for lifelong learning.
arXiv Detail & Related papers (2024-05-29T11:42:02Z)
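One plausible parameter-free selection rule, sketched below: store per-domain feature statistics when each adapter is learned, then at test time pick the adapter whose statistics best match the input feature. The `AdapterBank` class and the prototype criterion are assumptions for illustration, not AdalReID's actual mechanism.

```python
# Hypothetical parameter-free adapter auto-selection via stored prototypes.
import torch

class AdapterBank:
    def __init__(self):
        self.adapters, self.prototypes = [], []    # one prototype per dataset

    def add(self, adapter, features):              # features: (N, d) from that domain
        self.adapters.append(adapter)
        self.prototypes.append(features.mean(dim=0))

    def select(self, feat):                        # feat: (d,) backbone feature
        protos = torch.stack(self.prototypes)      # (num_adapters, d)
        sims = torch.nn.functional.cosine_similarity(feat.unsqueeze(0), protos)
        return self.adapters[int(sims.argmax())]   # no learned parameters involved

bank = AdapterBank()
for _ in range(3):                                 # three toy "datasets"
    bank.add(torch.nn.Linear(64, 64), torch.randn(100, 64))
adapter = bank.select(torch.randn(64))
```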
- Convolutional Prompting meets Language Models for Continual Learning [4.115213208594654]
Continual Learning (CL) enables machine learning models to learn from continuously shifting new training data in the absence of data from old tasks.
We propose ConvPrompt, a novel convolutional prompt creation mechanism that maintains layer-wise shared embeddings.
The intelligent use of convolution enables us to maintain a low parameter overhead without compromising performance.
arXiv Detail & Related papers (2024-03-29T17:40:37Z)
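A minimal sketch of convolutional prompt creation under that premise: a tiny per-task 1-D convolution slides over a shared embedding matrix to produce each task's prompt, so only a few hundred parameters are task-specific. Shapes and module names are illustrative assumptions, not ConvPrompt's exact design.

```python
# Illustrative convolutional prompt generator with layer-shared embeddings.
import torch
import torch.nn as nn

class ConvPromptGen(nn.Module):
    def __init__(self, dim=768, prompt_len=8, kernel=3):
        super().__init__()
        # shared across tasks: one embedding matrix (per transformer layer)
        self.shared = nn.Parameter(torch.randn(prompt_len, dim))
        # task-specific: small 1-D conv kernels, a few hundred params per task
        self.task_convs = nn.ModuleList()
        self.k = kernel

    def add_task(self):
        self.task_convs.append(
            nn.Conv1d(1, 1, self.k, padding=self.k // 2, bias=False))

    def forward(self, task_id):
        x = self.shared.unsqueeze(1)                   # (prompt_len, 1, dim)
        return self.task_convs[task_id](x).squeeze(1)  # (prompt_len, dim)

gen = ConvPromptGen()
gen.add_task()
print(gen(0).shape)                                    # torch.Size([8, 768])
```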
- Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters [65.15700861265432]
We present a parameter-efficient continual learning framework to alleviate long-term forgetting in incremental learning with vision-language models.
Our approach involves the dynamic expansion of a pre-trained CLIP model, through the integration of Mixture-of-Experts (MoE) adapters.
To preserve the zero-shot recognition capability of vision-language models, we introduce a Distribution Discriminative Auto-Selector.
arXiv Detail & Related papers (2024-03-18T08:00:23Z)
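A hedged sketch of a Mixture-of-Experts adapter around a frozen CLIP block: a router softly weights a few bottleneck experts and routes each input to its top-k of them. The routing scheme and shapes are assumptions for illustration; the Distribution Discriminative Auto-Selector itself is not reproduced here.

```python
# Illustrative MoE adapter layer (assumed top-k soft routing).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEAdapter(nn.Module):
    def __init__(self, dim=512, bottleneck=64, num_experts=4, top_k=2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, bottleneck), nn.ReLU(),
                          nn.Linear(bottleneck, dim))
            for _ in range(num_experts))
        self.top_k = top_k

    def forward(self, x):                                  # x: (B, dim)
        gates = F.softmax(self.router(x), dim=-1)          # (B, E)
        topv, topi = gates.topk(self.top_k, dim=-1)        # keep top-k experts
        mask = torch.zeros_like(gates).scatter_(1, topi, topv)
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)  # (B, E, dim)
        return x + (mask.unsqueeze(-1) * expert_out).sum(dim=1)  # residual

print(MoEAdapter()(torch.randn(2, 512)).shape)             # torch.Size([2, 512])
```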
- Class Incremental Learning with Pre-trained Vision-Language Models [59.15538370859431]
We propose an approach to exploiting pre-trained vision-language models (e.g., CLIP) that enables further adaptation.
Experiments on several conventional benchmarks consistently show a significant margin of improvement over the current state-of-the-art.
arXiv Detail & Related papers (2023-10-31T10:45:03Z)
- Tip-Adapter: Training-free Adaption of CLIP for Few-shot Classification [58.06983806317233]
Contrastive Vision-Language Pre-training, known as CLIP, has provided a new paradigm for learning visual representations using large-scale image-text pairs.
To enhance CLIP's adaption capability, existing methods propose fine-tuning additional learnable modules.
We propose a training-free adaption method for CLIP to conduct few-shot classification, termed Tip-Adapter.
arXiv Detail & Related papers (2022-07-19T19:12:11Z)
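Tip-Adapter's cache idea fits in a few lines: the few-shot image features become cache keys, their one-hot labels become values, and test-time logits blend the cache affinities with CLIP's zero-shot logits. The sketch below follows the general form of the paper's formula; the alpha/beta values and the toy data are illustrative.

```python
# Sketch of Tip-Adapter's training-free cache model with toy tensors.
import torch

def tip_adapter_logits(test_feat, keys, values, clip_text, alpha=1.0, beta=5.5):
    """test_feat: (B, d) L2-normalized CLIP image features.
    keys: (NK, d) few-shot features; values: (NK, C) one-hot labels.
    clip_text: (C, d) normalized class text embeddings."""
    affinity = test_feat @ keys.T                          # cosine similarities
    cache_logits = torch.exp(-beta * (1 - affinity)) @ values
    zero_shot = 100.0 * test_feat @ clip_text.T            # CLIP's usual scale
    return zero_shot + alpha * cache_logits

d, C, shots = 512, 10, 16
keys = torch.nn.functional.normalize(torch.randn(C * shots, d), dim=-1)
values = torch.eye(C).repeat_interleave(shots, dim=0)      # one-hot labels
text = torch.nn.functional.normalize(torch.randn(C, d), dim=-1)
x = torch.nn.functional.normalize(torch.randn(4, d), dim=-1)
print(tip_adapter_logits(x, keys, values, text).argmax(dim=-1))
```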
- Effects of Auxiliary Knowledge on Continual Learning [16.84113206569365]
In Continual Learning (CL), a neural network is trained on a stream of data whose distribution changes over time.
Most existing CL approaches focus on finding solutions to preserve acquired knowledge, i.e., they concentrate on the model's past.
We argue that, since the model has to continually learn new tasks, it is equally important to focus on present knowledge that could improve the learning of subsequent tasks.
arXiv Detail & Related papers (2022-06-03T14:31:59Z)
- CLIP-Adapter: Better Vision-Language Models with Feature Adapters [79.52844563138493]
We show that there is an alternative path to better vision-language models beyond prompt tuning.
In this paper, we propose CLIP-Adapter to conduct fine-tuning with feature adapters on either the visual or the language branch.
Experiments and extensive ablation studies on various visual classification tasks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2021-10-09T11:39:30Z)
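The adapter itself is small; below is a sketch of a residual feature adapter in that style: a bottleneck MLP over the frozen CLIP feature, blended back with the original feature by a residual ratio so pre-trained knowledge is preserved. The 0.2 ratio and dimensions are illustrative defaults, not necessarily the paper's settings.

```python
# Illustrative residual feature adapter in the CLIP-Adapter style.
import torch
import torch.nn as nn

class CLIPAdapter(nn.Module):
    def __init__(self, dim=512, reduction=4, ratio=0.2):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(dim, dim // reduction), nn.ReLU(),
            nn.Linear(dim // reduction, dim), nn.ReLU())
        self.ratio = ratio                          # residual blending weight

    def forward(self, f):                           # f: frozen CLIP feature
        return self.ratio * self.fc(f) + (1 - self.ratio) * f

f = torch.nn.functional.normalize(torch.randn(4, 512), dim=-1)
print(CLIPAdapter()(f).shape)                       # torch.Size([4, 512])
```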
This list is automatically generated from the titles and abstracts of the papers on this site.