Understanding and Mitigating Overfitting in Prompt Tuning for
Vision-Language Models
- URL: http://arxiv.org/abs/2211.02219v1
- Date: Fri, 4 Nov 2022 02:06:22 GMT
- Title: Understanding and Mitigating Overfitting in Prompt Tuning for
Vision-Language Models
- Authors: Chengcheng Ma, Yang Liu, Jiankang Deng, Lingxi Xie, Weiming Dong,
Changsheng Xu
- Abstract summary: We propose Subspace Prompt Tuning (SubPT) to project the gradients in back-propagation onto the low-rank subspace spanned by the early-stage gradient flow eigenvectors during the entire training process.
We equip CoOp with a Novel Feature Learner (NFL) to enhance the generalization ability of the learned prompts onto novel categories beyond the training set.
- Score: 108.13378788663196
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pre-trained Vision-Language Models (VLMs) such as CLIP have shown impressive
generalization capability in downstream vision tasks with appropriate text
prompts. Instead of designing prompts manually, Context Optimization (CoOp) has
been recently proposed to learn continuous prompts using task-specific training
data. Despite the performance improvements on downstream tasks, several studies
have reported that CoOp suffers from the overfitting issue in two aspects: (i)
the test accuracy on base classes first gets better and then gets worse during
training; (ii) the test accuracy on novel classes keeps decreasing. However,
none of the existing studies can understand and mitigate this overfitting
problem effectively. In this paper, we first explore the cause of overfitting
by analyzing the gradient flow. Comparative experiments reveal that CoOp favors
generalizable and spurious features in the early and later training stages
respectively, leading to the non-overfitting and overfitting phenomena. Given
those observations, we propose Subspace Prompt Tuning (SubPT) to project the
gradients in back-propagation onto the low-rank subspace spanned by the
early-stage gradient flow eigenvectors during the entire training process, and
successfully eliminate the overfitting problem. Besides, we equip CoOp with
Novel Feature Learner (NFL) to enhance the generalization ability of the
learned prompts to novel categories beyond the training set, without requiring
any image training data. Extensive experiments on 11 classification datasets
demonstrate that SubPT+NFL consistently boosts the performance of CoOp and
outperforms the state-of-the-art approach CoCoOp. Experiments on more
challenging vision downstream tasks including open-vocabulary object detection
and zero-shot semantic segmentation also verify the effectiveness of the
proposed method. Codes can be found at https://tinyurl.com/mpe64f89.
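As described in the abstract, SubPT projects back-propagated gradients onto the low-rank subspace spanned by early-stage gradient flow eigenvectors. A minimal NumPy sketch of that projection step follows; the function names, the choice of k, and the use of an SVD over stacked gradients are illustrative assumptions, not the paper's released code:

```python
import numpy as np

def early_stage_subspace(grad_history, k):
    """Orthonormal basis (d, k) of the top-k directions of the
    early-stage gradient flow.

    grad_history: (T, d) array stacking flattened prompt gradients
    recorded during the first few training epochs.
    """
    # The right singular vectors of the gradient matrix span the
    # dominant directions of the gradient flow in parameter space.
    _, _, vt = np.linalg.svd(grad_history, full_matrices=False)
    return vt[:k].T  # (d, k)

def project_gradient(g, basis):
    """Project a later-stage gradient g of shape (d,) onto the
    early-stage subspace spanned by the columns of basis."""
    return basis @ (basis.T @ g)
```

Under this reading, every prompt gradient in the remainder of training would be replaced by its projection before the optimizer step, suppressing the spurious-feature directions that only emerge in the later stages.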
Related papers
- IntCoOp: Interpretability-Aware Vision-Language Prompt Tuning [94.52149969720712]
IntCoOp learns to jointly align attribute-level inductive biases and class embeddings during prompt-tuning.
IntCoOp improves CoOp by 7.35% in average performance across 10 diverse datasets.
arXiv Detail & Related papers (2024-06-19T16:37:31Z)
- Visual Prompt Tuning in Null Space for Continual Learning [51.96411454304625]
Existing prompt-tuning methods have demonstrated impressive performance in continual learning (CL).
This paper aims to learn each task by tuning the prompts in the direction orthogonal to the subspace spanned by previous tasks' features.
In practice, an effective null-space-based approximation solution has been proposed to implement the prompt gradient projection.
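The orthogonal update described above can be sketched generically: given an orthonormal basis U of the subspace spanned by previous tasks' features, the prompt gradient is replaced by its component outside that subspace. This is a hedged illustration of the general null-space-projection idea, not the paper's implementation:

```python
import numpy as np

def null_space_project(g, U):
    """Remove from gradient g of shape (d,) its component inside the
    subspace spanned by the orthonormal columns of U of shape (d, k),
    so the update does not interfere with previous tasks' features."""
    return g - U @ (U.T @ g)
```

The resulting update direction is, by construction, orthogonal to every stored feature direction, which is what prevents interference with earlier tasks.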
arXiv Detail & Related papers (2024-06-09T05:57:40Z)
- AAPL: Adding Attributes to Prompt Learning for Vision-Language Models [6.32186874112557]
We propose adversarial token embedding to disentangle low-level visual augmentation features from high-level class information when inducing bias in learnable prompts.
We have conducted experiments across 11 datasets, and overall, AAPL shows favorable performances compared to the existing methods in few-shot learning, zero-shot learning, cross-dataset, and domain generalization tasks.
arXiv Detail & Related papers (2024-04-25T17:51:10Z)
- Revisiting the Power of Prompt for Visual Tuning [50.11465784194896]
This study explores how the correlation between prompts and patch tokens evolves during training.
Inspired by the observation that the prompt tokens tend to share high mutual information with patch tokens, we propose initializing prompts with downstream token prototypes.
Our method significantly advances the adaptation for self-supervised pretraining, achieving impressive task performance gains of at least 10% to 30%.
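Initializing prompts from downstream token prototypes, as summarized above, amounts to averaging patch-token embeddings collected from the downstream data and using the resulting vectors as the initial prompt tokens. A hedged sketch, assuming a simple per-group mean as the prototype (the grouping and function name are illustrative, not from the paper):

```python
import numpy as np

def prompt_init_from_prototypes(patch_tokens, labels, num_prompts):
    """Initialize prompt tokens as prototypes of downstream patch tokens.

    patch_tokens: (N, d) patch-token embeddings from the downstream data.
    labels: (N,) integer group assignment (e.g. cluster id) per token.
    num_prompts: number of prompt tokens to initialize.

    Returns (num_prompts, d): the mean embedding of each of the first
    num_prompts groups, used as the initial prompt values.
    """
    # Each prototype is the centroid of one group of patch tokens.
    return np.stack([
        patch_tokens[labels == c].mean(axis=0)
        for c in range(num_prompts)
    ])
```

Starting the prompts near the patch-token distribution, rather than at random, is what the observed high mutual information between prompts and patch tokens would motivate.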
arXiv Detail & Related papers (2024-02-04T07:49:02Z)
- Hierarchical Decomposition of Prompt-Based Continual Learning: Rethinking Obscured Sub-optimality [55.88910947643436]
Self-supervised pre-training is essential for handling vast quantities of unlabeled data in practice.
HiDe-Prompt is an innovative approach that explicitly optimizes the hierarchical components with an ensemble of task-specific prompts and statistics.
Our experiments demonstrate the superior performance of HiDe-Prompt and its robustness to pre-training paradigms in continual learning.
arXiv Detail & Related papers (2023-10-11T06:51:46Z)
- Consistency-guided Prompt Learning for Vision-Language Models [23.4909421082857]
We propose Consistency-guided Prompt learning (CoPrompt), a new fine-tuning method for vision-language models.
Our approach improves the generalization of large foundation models when fine-tuned on downstream tasks in a few-shot setting.
arXiv Detail & Related papers (2023-06-01T23:20:47Z)
- Improved Visual Fine-tuning with Natural Language Supervision [36.250244364023665]
Fine-tuning a visual pre-trained model can leverage the semantic information from large-scale pre-training data.
The problem of catastrophic forgetting in the pre-trained backbone has been extensively studied for fine-tuning.
We introduce a reference distribution obtained from a fixed text classifier, which can help regularize the learned vision classifier.
arXiv Detail & Related papers (2023-04-04T03:08:02Z)
- Conditional Prompt Learning for Vision-Language Models [107.06776396086471]
A recently proposed method named Context Optimization (CoOp) turns context words in a prompt into a set of learnable vectors.
Our experiments show that CoCoOp generalizes much better than CoOp to unseen classes, even showing promising transferability beyond a single dataset.
arXiv Detail & Related papers (2022-03-10T18:59:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.