Prompt Generation Networks for Input-based Adaptation of Frozen Vision
Transformers
- URL: http://arxiv.org/abs/2210.06466v2
- Date: Wed, 19 Apr 2023 15:48:45 GMT
- Title: Prompt Generation Networks for Input-based Adaptation of Frozen Vision
Transformers
- Authors: Jochem Loedeman, Maarten C. Stol, Tengda Han, Yuki M. Asano
- Abstract summary: Prompt Generation Network (PGN) generates high performing, input-dependent prompts by sampling from an end-to-end learned library of tokens.
"prompt inversion" trick, with which PGNs can be efficiently trained in a latent space but deployed as strictly input-only prompts for inference.
It surpasses previous methods by a large margin on 12/12 datasets and even outperforms full-finetuning on 5/12, while requiring 100x less parameters.
- Score: 9.080472817672264
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the introduction of the transformer architecture in computer vision,
increasing model scale has been demonstrated as a clear path to achieving
performance and robustness gains. However, with model parameter counts reaching
the billions, classical finetuning approaches are becoming increasingly
limiting and even infeasible when models become hosted as inference APIs, as in
NLP. To this end, visual prompt learning, whereby a model is adapted by
learning additional inputs, has emerged as a potential solution for adapting
frozen and cloud-hosted models: During inference, this neither requires access
to the internals of models' forward pass function, nor requires any
post-processing. In this work, we propose the Prompt Generation Network (PGN)
that generates high performing, input-dependent prompts by sampling from an
end-to-end learned library of tokens. We further introduce the "prompt
inversion" trick, with which PGNs can be efficiently trained in a latent space
but deployed as strictly input-only prompts for inference. We show the PGN is
effective in adapting pre-trained models to various new datasets: It surpasses
previous methods by a large margin on 12/12 datasets and even outperforms
full-finetuning on 5/12, while requiring 100x fewer parameters.
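To make the mechanism described above concrete, below is a minimal PyTorch sketch under stated assumptions: the lightweight encoder, library size, soft (rather than hard) sampling over the token library, and the pseudo-inverse-based inversion of a linear ViT patch embedding are illustrative choices, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class PromptGenerationNetwork(nn.Module):
    """Sketch of a PGN: a lightweight encoder produces, per input image,
    mixing weights over a learned library of tokens; the resulting
    input-dependent prompt tokens are prepended to the frozen ViT's
    token sequence during training."""

    def __init__(self, library_size=256, num_prompts=8, token_dim=768, feat_dim=128):
        super().__init__()
        # End-to-end learned library of candidate prompt tokens.
        self.token_library = nn.Parameter(0.02 * torch.randn(library_size, token_dim))
        # Lightweight input encoder (an illustrative stand-in, not the paper's choice).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, kernel_size=16, stride=16),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Maps image features to per-prompt scores over the library.
        self.to_scores = nn.Linear(feat_dim, num_prompts * library_size)
        self.num_prompts = num_prompts
        self.library_size = library_size

    def forward(self, images):                          # images: (B, 3, H, W)
        scores = self.to_scores(self.encoder(images))
        scores = scores.view(-1, self.num_prompts, self.library_size)
        # Soft "sampling": each prompt is a convex combination of library tokens.
        weights = scores.softmax(dim=-1)
        return weights @ self.token_library             # (B, num_prompts, token_dim)


def invert_prompts(prompts, patch_embed: nn.Linear, patch_size=16):
    """Illustrative take on "prompt inversion": map latent prompt tokens back
    through the pseudo-inverse of the frozen, linear patch embedding, giving
    pixel-space patches that can simply be appended to the model's input."""
    w_pinv = torch.linalg.pinv(patch_embed.weight)       # (3 * p * p, token_dim)
    pixels = (prompts - patch_embed.bias) @ w_pinv.T     # (B, num_prompts, 3 * p * p)
    return pixels.view(*prompts.shape[:2], 3, patch_size, patch_size)
```

In this sketch, the patches recovered by `invert_prompts` only extend the input, so a cloud-hosted transformer can be adapted without any access to its forward pass, which is the sense in which the prompts are "strictly input-only" at inference.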
Related papers
- Visual Fourier Prompt Tuning [63.66866445034855]
We propose the Visual Fourier Prompt Tuning (VFPT) method as a general and effective solution for adapting large-scale transformer-based models.
Our approach incorporates the Fast Fourier Transform into prompt embeddings, jointly considering spatial- and frequency-domain information (a rough sketch of this idea follows the list below).
Our results demonstrate that our approach outperforms current state-of-the-art baselines on two benchmarks.
arXiv Detail & Related papers (2024-11-02T18:18:35Z)
- Gradient Projection For Continual Parameter-Efficient Tuning [42.800411328615894]
We reformulate Adapter, LoRA, Prefix-tuning, and Prompt-tuning from the perspective of gradient projection.
We show that satisfying this gradient-projection condition effectively resists forgetting, even for large-scale models.
We extensively evaluate our method with different backbones, including ViT and CLIP, on diverse datasets.
arXiv Detail & Related papers (2024-05-22T06:33:48Z)
- Feature Distribution Shift Mitigation with Contrastive Pretraining for Intrusion Detection [7.986219763892841]
We show that model pretraining can increase robustness against feature distribution shifts by over 8%.
We also show how an adequate numerical embedding strategy further enhances the performance of pretrained models.
The proposed SwapCon model also outperforms eXtreme Gradient Boosting (XGBoost) and K-Nearest Neighbor (KNN) based models by a large margin.
arXiv Detail & Related papers (2024-04-23T10:15:10Z)
- Dynamic Adapter Meets Prompt Tuning: Parameter-Efficient Transfer Learning for Point Cloud Analysis [51.14136878142034]
Point cloud analysis has achieved outstanding performance by transferring point cloud pre-trained models.
Existing methods for model adaptation usually update all model parameters, which is inefficient due to the high computational cost involved.
In this paper, we aim to study parameter-efficient transfer learning for point cloud analysis with an ideal trade-off between task performance and parameter efficiency.
arXiv Detail & Related papers (2024-03-03T08:25:04Z)
- E^2VPT: An Effective and Efficient Approach for Visual Prompt Tuning [55.50908600818483]
Fine-tuning large-scale pretrained vision models for new tasks has become increasingly parameter-intensive.
We propose an Effective and Efficient Visual Prompt Tuning (E2VPT) approach for large-scale transformer-based model adaptation.
Our approach outperforms several state-of-the-art baselines on two benchmarks.
arXiv Detail & Related papers (2023-07-25T19:03:21Z)
- STAR: Sparse Transformer-based Action Recognition [61.490243467748314]
This work proposes a novel skeleton-based human action recognition model with sparse attention on the spatial dimension and segmented linear attention on the temporal dimension of data.
Experiments show that our model achieves comparable performance with far fewer trainable parameters, along with high speed in training and inference.
arXiv Detail & Related papers (2021-07-15T02:53:11Z)
- Pre-Trained Models for Heterogeneous Information Networks [57.78194356302626]
We propose a self-supervised pre-training and fine-tuning framework, PF-HIN, to capture the features of a heterogeneous information network.
PF-HIN consistently and significantly outperforms state-of-the-art alternatives on each of these tasks across four datasets.
arXiv Detail & Related papers (2020-07-07T03:36:28Z)
- Parameter-Efficient Transfer from Sequential Behaviors for User Modeling and Recommendation [111.44445634272235]
In this paper, we develop a parameter-efficient transfer learning architecture, termed PeterRec.
PeterRec allows the pre-trained parameters to remain unaltered during fine-tuning by injecting a series of re-learned neural networks.
We perform extensive experimental ablation to show the effectiveness of the learned user representation in five downstream tasks.
arXiv Detail & Related papers (2020-01-13T14:09:54Z)
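As a rough illustration of the Fourier-style prompts mentioned for Visual Fourier Prompt Tuning above, here is a hedged sketch under assumed details: keeping only the real spectrum and mixing it back with a linear layer are illustrative choices, not the VFPT paper's formulation.

```python
import torch
import torch.nn as nn

class FourierPrompt(nn.Module):
    """Hypothetical sketch: learnable prompt tokens whose representation mixes
    spatial-domain values with a Fourier-spectrum view of themselves before
    being prepended to a transformer's token sequence."""

    def __init__(self, num_prompts=8, token_dim=768):
        super().__init__()
        self.prompts = nn.Parameter(0.02 * torch.randn(num_prompts, token_dim))
        # Mixes spatial-domain tokens with their frequency-domain counterparts.
        self.mix = nn.Linear(2 * token_dim, token_dim)

    def forward(self, batch_size):
        spectrum = torch.fft.fft(self.prompts, dim=-1).real        # keep only the real part here
        mixed = self.mix(torch.cat([self.prompts, spectrum], dim=-1))
        return mixed.unsqueeze(0).expand(batch_size, -1, -1)       # (B, num_prompts, token_dim)
```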