Few-Shot Class Incremental Learning with Attention-Aware Self-Adaptive Prompt
- URL: http://arxiv.org/abs/2403.09857v3
- Date: Wed, 17 Jul 2024 16:00:27 GMT
- Title: Few-Shot Class Incremental Learning with Attention-Aware Self-Adaptive Prompt
- Authors: Chenxi Liu, Zhenyi Wang, Tianyi Xiong, Ruibo Chen, Yihan Wu, Junfeng Guo, Heng Huang
- Abstract summary: We propose a novel framework named Attention-aware Self-adaptive Prompt (ASP).
ASP encourages task-invariant prompts to capture shared knowledge by suppressing task-specific information at the attention level.
In summary, ASP prevents overfitting on the base task and does not require large amounts of data in the few-shot incremental tasks.
- Score: 58.880105981772324
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-Shot Class-Incremental Learning (FSCIL) models aim to incrementally learn new classes with scarce samples while preserving knowledge of old ones. Existing FSCIL methods usually fine-tune the entire backbone, leading to overfitting and hindering the potential to learn new classes. On the other hand, recent prompt-based CIL approaches alleviate forgetting by training prompts with sufficient data in each task. In this work, we propose a novel framework named Attention-aware Self-adaptive Prompt (ASP). ASP encourages task-invariant prompts to capture shared knowledge by suppressing task-specific information at the attention level. Additionally, self-adaptive task-specific prompts in ASP provide specific information and transfer knowledge from old classes to new classes with an Information Bottleneck learning objective. In summary, ASP prevents overfitting on the base task and does not require large amounts of data in the few-shot incremental tasks. Extensive experiments on three benchmark datasets validate that ASP consistently outperforms state-of-the-art FSCIL and prompt-based CIL methods in terms of both learning new classes and mitigating forgetting.
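To make the prompt mechanism concrete, below is a minimal sketch of prompt-prepending on a frozen transformer backbone, in the spirit of the description above. It is not the authors' ASP implementation: the toy one-layer encoder, the prompt lengths, and the simple norm penalty standing in for the Information Bottleneck objective are all illustrative assumptions.

```python
# Minimal sketch of prompt-based incremental learning with a frozen backbone.
# Illustrative only; not the ASP implementation from the paper above.
import torch
import torch.nn as nn

class PromptedEncoder(nn.Module):
    def __init__(self, dim=64, n_heads=4, n_inv=4, n_spec=4):
        super().__init__()
        # Stand-in for a frozen pre-trained backbone (one layer for brevity).
        self.backbone = nn.TransformerEncoderLayer(
            d_model=dim, nhead=n_heads, batch_first=True)
        for p in self.backbone.parameters():
            p.requires_grad = False
        # Task-invariant prompts: shared across tasks to capture common knowledge.
        self.invariant_prompts = nn.Parameter(0.02 * torch.randn(n_inv, dim))
        # Task-specific prompts: one set per task (a single set here for brevity).
        self.specific_prompts = nn.Parameter(0.02 * torch.randn(n_spec, dim))

    def forward(self, tokens):
        # tokens: (batch, seq_len, dim) patch embeddings, e.g. from a frozen ViT.
        b = tokens.size(0)
        prompts = torch.cat([self.invariant_prompts, self.specific_prompts])
        prompts = prompts.unsqueeze(0).expand(b, -1, -1)
        out = self.backbone(torch.cat([prompts, tokens], dim=1))
        return out[:, 0]  # pool the feature from the first prompt position

def compression_penalty(specific_prompts):
    # Hypothetical stand-in for the Information Bottleneck term: penalize the
    # magnitude of task-specific prompts so they encode only what the
    # task-invariant prompts do not already cover.
    return specific_prompts.pow(2).mean()

model = PromptedEncoder()
feats = model(torch.randn(2, 16, 64))  # two images, 16 patch tokens each
loss = feats.sum() + 0.1 * compression_penalty(model.specific_prompts)  # dummy loss
loss.backward()  # gradients flow only into the prompt parameters
```

Because the backbone stays frozen, only the small set of prompt parameters is updated in each incremental task, which is what keeps few-shot updates from overfitting.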
Related papers
- PECTP: Parameter-Efficient Cross-Task Prompts for Incremental Vision Transformer [76.39111896665585]
Incremental Learning (IL) aims to continually train deep models on a sequence of tasks.
Recent large pre-trained models (PTMs) have achieved outstanding performance in practical IL through prompt techniques, without access to old samples.
arXiv Detail & Related papers (2024-07-04T10:37:58Z)
- Class-Incremental Few-Shot Event Detection [68.66116956283575]
This paper proposes a new task, called class-incremental few-shot event detection.
The task faces two problems: forgetting of old knowledge and overfitting to new classes.
To address both, the paper presents Prompt-KD, a novel method based on knowledge distillation and prompt learning.
arXiv Detail & Related papers (2024-04-02T09:31:14Z)
- Convolutional Prompting meets Language Models for Continual Learning [4.115213208594654]
Continual Learning (CL) enables machine learning models to learn from continuously shifting new training data in the absence of data from old tasks.
We propose ConvPrompt, a novel convolutional prompt creation mechanism that maintains layer-wise shared embeddings.
The intelligent use of convolution enables us to maintain a low parameter overhead without compromising performance.
arXiv Detail & Related papers (2024-03-29T17:40:37Z)
- Towards Non-Exemplar Semi-Supervised Class-Incremental Learning [33.560003528712414]
Class-incremental learning aims to gradually recognize new classes while maintaining the discriminability of old ones.
We propose a non-exemplar semi-supervised CIL framework with contrastive learning and a semi-supervised incremental prototype classifier (Semi-IPC).
Semi-IPC learns a prototype for each class with unsupervised regularization, enabling the model to incrementally learn from partially labeled new data; a generic sketch of prototype-based classification appears after this list.
arXiv Detail & Related papers (2024-03-27T06:28:19Z)
- Enhancing Visual Continual Learning with Language-Guided Supervision [76.38481740848434]
Continual learning aims to empower models to learn new tasks without forgetting previously acquired knowledge.
We argue that the scarce semantic information conveyed by one-hot labels hampers effective knowledge transfer across tasks.
Specifically, we use pre-trained language models (PLMs) to generate semantic targets for each class, which are frozen and serve as supervision signals.
arXiv Detail & Related papers (2024-03-24T12:41:58Z)
- Federated Class-Incremental Learning with Prompting [18.52169733483851]
We propose a novel method called Federated Class-Incremental Learning with PrompTing.
We encode task-relevant and task-irrelevant knowledge into prompts, preserving both old and new knowledge of the local clients.
FCI achieves significant accuracy improvements over state-of-the-art methods.
arXiv Detail & Related papers (2023-10-13T08:35:02Z)
- POP: Prompt Of Prompts for Continual Learning [59.15888651733645]
Continual learning (CL) aims to mimic the human ability to learn new concepts without catastrophic forgetting.
We show that a foundation model equipped with POP learning is able to outperform classic CL methods by a significant margin.
arXiv Detail & Related papers (2023-06-14T02:09:26Z)
- iTAML: An Incremental Task-Agnostic Meta-learning Approach [123.10294801296926]
Humans can continuously learn new knowledge as their experience grows.
Knowledge previously learned by deep neural networks, in contrast, can quickly fade when they are trained on a new task.
We introduce a novel meta-learning approach that seeks to maintain an equilibrium between all encountered tasks.
arXiv Detail & Related papers (2020-03-25T21:42:48Z)
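As referenced in the Semi-IPC entry above, the sketch below shows generic nearest-prototype classification, the textbook mechanism underlying prototype-based incremental classifiers. It is not code from any paper listed here; the feature dimension and class setup are arbitrary assumptions.

```python
# Generic nearest-class-mean (prototype) classifier; illustrative only.
import torch
import torch.nn.functional as F

class PrototypeClassifier:
    """Nearest-class-mean classifier: new classes are added without retraining."""

    def __init__(self):
        self.prototypes = {}  # class id -> mean feature vector

    def add_class(self, class_id, features):
        # features: (n_samples, dim) embeddings of labeled examples for one class.
        self.prototypes[class_id] = features.mean(dim=0)

    def predict(self, features):
        # Assign each feature to the class with the most similar prototype.
        ids = list(self.prototypes.keys())
        protos = torch.stack([self.prototypes[i] for i in ids])  # (n_classes, dim)
        sims = F.cosine_similarity(
            features.unsqueeze(1), protos.unsqueeze(0), dim=-1)  # (n, n_classes)
        return [ids[j] for j in sims.argmax(dim=1).tolist()]

clf = PrototypeClassifier()
clf.add_class(0, torch.randn(5, 16))  # base class learned from five samples
clf.add_class(1, torch.randn(5, 16))  # class added incrementally
print(clf.predict(torch.randn(3, 16)))
```

Because each class is summarized by a single mean embedding, adding a class touches no existing parameters, which is why prototype classifiers pair naturally with incremental settings.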