Vector Quantization Prompting for Continual Learning
- URL: http://arxiv.org/abs/2410.20444v1
- Date: Sun, 27 Oct 2024 13:43:53 GMT
- Title: Vector Quantization Prompting for Continual Learning
- Authors: Li Jiao, Qiuxia Lai, Yu Li, Qiang Xu
- Abstract summary: Continual learning requires to overcome catastrophic forgetting when training a single model on a sequence of tasks.
Recent top-performing approaches are prompt-based methods that utilize a set of learnable parameters to encode task knowledge.
We propose VQ-Prompt, a prompt-based continual learning method that incorporates Vector Quantization into end-to-end training of a set of discrete prompts.
- Score: 23.26682439914273
- Abstract: Continual learning requires overcoming catastrophic forgetting when training a single model on a sequence of tasks. Recent top-performing approaches are prompt-based methods that utilize a set of learnable parameters (i.e., prompts) to encode task knowledge, from which appropriate ones are selected to guide the fixed pre-trained model in generating features tailored to a certain task. However, existing methods rely on predicting prompt identities for prompt selection, where the identity prediction process cannot be optimized with task loss. This limitation leads to sub-optimal prompt selection and inadequate adaptation of pre-trained features for a specific task. Previous efforts have tried to address this by directly generating prompts from input queries instead of selecting from a set of candidates. However, these prompts are continuous and lack sufficient abstraction for task knowledge representation, making them less effective for continual learning. To address these challenges, we propose VQ-Prompt, a prompt-based continual learning method that incorporates Vector Quantization (VQ) into end-to-end training of a set of discrete prompts. In this way, VQ-Prompt can optimize the prompt selection process with task loss and meanwhile achieve effective abstraction of task knowledge for continual learning. Extensive experiments show that VQ-Prompt outperforms state-of-the-art continual learning methods across a variety of benchmarks under the challenging class-incremental setting. The code is available at https://github.com/jiaolifengmi/VQ-Prompt.
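To make the mechanism concrete, below is a minimal PyTorch sketch of vector-quantized prompt selection as the abstract describes it: a continuous query is snapped to its nearest entry in a learnable codebook of discrete prompts, and a straight-through estimator (standard in VQ-VAE-style training) lets the task loss update the selection end-to-end. The class name, dimensions, commitment-loss weight, and the straight-through choice are illustrative assumptions, not the authors' released implementation (see the GitHub link above for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VQPromptSelector(nn.Module):
    """Minimal sketch of vector-quantized prompt selection (illustrative only).

    A query derived from the frozen pre-trained backbone proposes a continuous
    prompt; it is snapped to the nearest entry of a learnable prompt codebook.
    A straight-through estimator lets the task loss reach both the query path
    and the selected codebook prompt, so selection is trained end-to-end.
    """

    def __init__(self, num_prompts=10, prompt_len=8, embed_dim=768, beta=0.25):
        super().__init__()
        # Codebook of discrete prompts: each entry is a (prompt_len x embed_dim) prompt.
        self.codebook = nn.Parameter(torch.randn(num_prompts, prompt_len, embed_dim) * 0.02)
        self.beta = beta  # commitment-loss weight (hypothetical value)

    def forward(self, query):
        # query: (B, prompt_len, embed_dim) continuous prompt proposal.
        flat_q = query.reshape(query.size(0), -1)                  # (B, L*D)
        flat_c = self.codebook.reshape(self.codebook.size(0), -1)  # (K, L*D)
        idx = torch.cdist(flat_q, flat_c).argmin(dim=1)            # hard prompt selection
        selected = self.codebook[idx]                              # (B, L, D)

        # Straight-through estimator: forward uses the discrete prompt,
        # backward routes gradients to the continuous query.
        prompt = query + (selected - query).detach()

        # VQ-VAE-style regularizers keep codebook entries and queries close.
        vq_loss = (F.mse_loss(selected, query.detach())
                   + self.beta * F.mse_loss(query, selected.detach()))
        return prompt, vq_loss
```

In a full class-incremental pipeline, the returned prompt would be prepended to the frozen transformer's token sequence and vq_loss added to the classification objective.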
Related papers
- PECTP: Parameter-Efficient Cross-Task Prompts for Incremental Vision Transformer [76.39111896665585]
Incremental Learning (IL) aims to learn deep models on sequential tasks continually.
Recent large pre-trained models (PTMs) have achieved outstanding performance in practical IL through prompt techniques, without access to old samples.
arXiv Detail & Related papers (2024-07-04T10:37:58Z) - Q-Tuning: Queue-based Prompt Tuning for Lifelong Few-shot Language Learning [21.261637357094035]
Q-tuning enables lifelong learning of a pre-trained language model.
When learning a new task, Q-tuning trains a task-specific prompt by adding it to a prompt queue consisting of the prompts from older tasks.
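A rough sketch of that queue idea is below; the prompt shapes, queue capacity, and the naive FIFO eviction are assumptions for illustration, not the paper's actual queue-management scheme.

```python
import torch
import torch.nn as nn

class PromptQueue(nn.Module):
    """Toy queue-based prompt store (illustrative, not the Q-tuning code).

    Prompts learned on earlier tasks are kept frozen in a queue; each new task
    appends a fresh trainable prompt, and the concatenated queue conditions
    the frozen pre-trained language model.
    """

    def __init__(self, prompt_len=10, embed_dim=768, max_size=8):
        super().__init__()
        self.prompt_len, self.embed_dim, self.max_size = prompt_len, embed_dim, max_size
        self.queue = nn.ParameterList()

    def start_new_task(self):
        # Freeze previously learned prompts, then enqueue a new trainable one.
        for p in self.queue:
            p.requires_grad_(False)
        if len(self.queue) >= self.max_size:
            # Naive FIFO eviction once the queue is full (an assumption here).
            self.queue = nn.ParameterList(list(self.queue)[1:])
        self.queue.append(nn.Parameter(torch.randn(self.prompt_len, self.embed_dim) * 0.02))

    def forward(self, batch_size):
        # Concatenate all queued prompts along the token dimension.
        prompts = torch.cat(list(self.queue), dim=0)            # (num_prompts * L, D)
        return prompts.unsqueeze(0).expand(batch_size, -1, -1)  # (B, num_prompts * L, D)
```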
arXiv Detail & Related papers (2024-04-22T22:04:16Z) - Semantic Prompting with Image-Token for Continual Learning [7.5140668729696145]
I-Prompt is a task-agnostic approach to eliminate task prediction.
Our method achieves competitive performance on four benchmarks.
We demonstrate the superiority of our method across various scenarios through extensive experiments.
arXiv Detail & Related papers (2024-03-18T07:43:14Z) - Consistent Prompting for Rehearsal-Free Continual Learning [5.166083532861163]
Continual learning empowers models to adapt autonomously to the ever-changing environment or data streams without forgetting old knowledge.
Existing prompt-based methods are inconsistent between training and testing, limiting their effectiveness.
We propose a novel prompt-based method, Consistent Prompting (CPrompt), for more aligned training and testing.
arXiv Detail & Related papers (2024-03-13T14:24:09Z) - Hierarchical Prompts for Rehearsal-free Continual Learning [67.37739666753008]
Continual learning endeavors to equip the model with the capability to integrate current task knowledge while mitigating the forgetting of past task knowledge.
Inspired by prompt tuning, prompt-based methods maintain a frozen backbone and train with slight learnable prompts.
This paper introduces a novel rehearsal-free paradigm for continual learning termed Hierarchical Prompts (H-Prompts).
arXiv Detail & Related papers (2024-01-21T16:59:44Z) - Active Instruction Tuning: Improving Cross-Task Generalization by Training on Prompt Sensitive Tasks [101.40633115037983]
Instruction tuning (IT) achieves impressive zero-shot generalization results by training large language models (LLMs) on a massive amount of diverse tasks with instructions.
How to select new tasks to improve the performance and generalizability of IT models remains an open question.
We propose active instruction tuning based on prompt uncertainty, a novel framework to identify informative tasks, and then actively tune the models on the selected tasks.
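As a loose illustration of scoring a task by prompt sensitivity, the helper below runs one example under several perturbed instructions and measures how much the output distribution shifts (assuming a Hugging Face-style causal LM interface); the perturbation scheme and disagreement metric are guesses for illustration, not the paper's definition of prompt uncertainty.

```python
import torch

def prompt_uncertainty(model, tokenizer, instruction_variants, example_input):
    """Toy prompt-sensitivity score for one task (illustrative only)."""
    probs = []
    with torch.no_grad():
        for instruction in instruction_variants:
            enc = tokenizer(instruction + "\n" + example_input, return_tensors="pt")
            logits = model(**enc).logits[:, -1, :]        # next-token distribution
            probs.append(torch.softmax(logits, dim=-1))
    stacked = torch.stack(probs, dim=0)                   # (num_variants, 1, vocab)
    mean = stacked.mean(dim=0, keepdim=True)
    # Average KL divergence from the mean distribution as a disagreement score;
    # tasks with high disagreement would be prioritized for tuning.
    kl = (stacked * (stacked.clamp_min(1e-8).log() - mean.clamp_min(1e-8).log())).sum(-1)
    return kl.mean().item()
```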
arXiv Detail & Related papers (2023-11-01T04:40:05Z) - Reinforcement Learning with Success Induced Task Prioritization [68.8204255655161]
We introduce Success Induced Task Prioritization (SITP), a framework for automatic curriculum learning.
The algorithm selects the order of tasks that provide the fastest learning for agents.
We demonstrate that SITP matches or surpasses the results of other curriculum design methods.
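A toy version of success-driven task sampling in this spirit is sketched below; the window size, epsilon floor, and the learning-progress measure are illustrative choices, not SITP's exact formulation.

```python
import random
from collections import deque, defaultdict

class SuccessBasedTaskSampler:
    """Toy sketch of success-driven task prioritization (not the SITP reference code)."""

    def __init__(self, tasks, window=20, eps=0.05):
        self.tasks = list(tasks)
        self.window = window
        self.eps = eps
        # Keep the last 2*window success flags per task.
        self.history = defaultdict(lambda: deque(maxlen=2 * window))

    def record(self, task, success):
        # success: 1.0 if the episode on `task` succeeded, else 0.0.
        self.history[task].append(float(success))

    def _progress(self, task):
        h = list(self.history[task])
        if len(h) < 2 * self.window:
            return self.eps  # not enough data yet: keep the task in rotation
        old = sum(h[: self.window]) / self.window
        new = sum(h[self.window :]) / self.window
        # Recent change in success rate as a crude learning-progress signal.
        return abs(new - old) + self.eps

    def sample(self):
        # Tasks whose success rate is changing fastest are sampled more often.
        weights = [self._progress(t) for t in self.tasks]
        return random.choices(self.tasks, weights=weights, k=1)[0]
```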
arXiv Detail & Related papers (2022-12-30T12:32:43Z) - Instance-wise Prompt Tuning for Pretrained Language Models [72.74916121511662]
Instance-wise Prompt Tuning (IPT) is the first prompt learning paradigm that injects knowledge from the input data instances into the prompts.
IPT significantly outperforms task-based prompt learning methods, and achieves comparable performance to conventional finetuning with only 0.5% - 1.5% of tuned parameters.
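A small sketch of instance-conditioned prompting in this spirit follows; the pooling, layer sizes, and prompt length are assumptions, and it is not the IPT implementation.

```python
import torch
import torch.nn as nn

class InstancePromptGenerator(nn.Module):
    """Illustrative instance-wise prompt generator (not the IPT code).

    Instead of one shared soft prompt, a lightweight network maps each input
    instance's pooled embedding to its own prompt tokens, which are prepended
    to the frozen language model's input embeddings.
    """

    def __init__(self, embed_dim=768, prompt_len=5, hidden=256):
        super().__init__()
        self.prompt_len = prompt_len
        self.net = nn.Sequential(
            nn.Linear(embed_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, prompt_len * embed_dim),
        )

    def forward(self, token_embeds):
        # token_embeds: (B, T, D) input embeddings from the frozen backbone.
        instance_repr = token_embeds.mean(dim=1)              # (B, D) pooled instance summary
        prompts = self.net(instance_repr)                     # (B, L*D)
        prompts = prompts.view(-1, self.prompt_len, token_embeds.size(-1))
        return torch.cat([prompts, token_embeds], dim=1)      # prepend instance-specific prompts
```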
arXiv Detail & Related papers (2022-06-04T10:08:50Z) - Continual Prompt Tuning for Dialog State Tracking [58.66412648276873]
A desirable dialog system should be able to continually learn new skills without forgetting old ones.
We present Continual Prompt Tuning, a parameter-efficient framework that not only avoids forgetting but also enables knowledge transfer between tasks.
arXiv Detail & Related papers (2022-03-13T13:22:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.