Parameter-efficient Prompt Learning for 3D Point Cloud Understanding
- URL: http://arxiv.org/abs/2402.15823v1
- Date: Sat, 24 Feb 2024 14:20:50 GMT
- Title: Parameter-efficient Prompt Learning for 3D Point Cloud Understanding
- Authors: Hongyu Sun and Yongcai Wang and Wang Chen and Haoran Deng and Deying
Li
- Abstract summary: This paper presents a parameter-efficient prompt tuning method to adapt a large multi-modal model for 3D point cloud understanding.
A PromptLearner module is devised to replace hand-crafted prompts with learnable contexts.
A lightweight PointAdapter module is arranged near target tasks to enhance prompt tuning for 3D point cloud understanding.
- Score: 10.23165979353247
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper presents a parameter-efficient prompt tuning method, named PPT, to
adapt a large multi-modal model for 3D point cloud understanding. Existing
strategies are quite expensive in computation and storage, and depend on
time-consuming prompt engineering. We address the problems from three aspects.
Firstly, a PromptLearner module is devised to replace hand-crafted prompts with
learnable contexts to automate the prompt tuning process. Then, we lock the
pre-trained backbone instead of adopting the full fine-tuning paradigm to
substantially improve the parameter efficiency. Finally, a lightweight
PointAdapter module is arranged near target tasks to enhance prompt tuning for
3D point cloud understanding. Comprehensive experiments are conducted to
demonstrate the superior parameter and data efficiency of the proposed
method. Meanwhile, we obtain new records on 4 public datasets and multiple 3D
tasks, i.e., point cloud recognition, few-shot learning, and part segmentation.
The implementation is available at https://github.com/auniquesun/PPT.
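Below is a minimal PyTorch sketch of the recipe the abstract describes (learnable prompt contexts, a locked backbone, and a lightweight adapter near the task). Module names, shapes, and initialization are illustrative assumptions, not the authors' implementation; see the linked repository for that.
```python
# Minimal sketch of parameter-efficient prompt tuning as outlined in the abstract:
# learnable prompt context + frozen backbone + a small task-side adapter.
# All names and dimensions below are illustrative assumptions.
import torch
import torch.nn as nn


class PromptLearner(nn.Module):
    """Learnable context vectors that replace a hand-crafted text prompt."""

    def __init__(self, n_ctx: int = 16, ctx_dim: int = 512, n_classes: int = 40):
        super().__init__()
        # Shared context, randomly initialized and tuned by backprop.
        self.ctx = nn.Parameter(torch.randn(n_ctx, ctx_dim) * 0.02)
        # Class-name embeddings would normally come from a text tokenizer;
        # here they are placeholders.
        self.cls_emb = nn.Parameter(torch.randn(n_classes, 1, ctx_dim) * 0.02)

    def forward(self) -> torch.Tensor:
        # Returns [n_classes, n_ctx + 1, ctx_dim]: context prepended to each class token.
        ctx = self.ctx.unsqueeze(0).expand(self.cls_emb.size(0), -1, -1)
        return torch.cat([ctx, self.cls_emb], dim=1)


class PointAdapter(nn.Module):
    """Lightweight bottleneck adapter placed near the task head."""

    def __init__(self, dim: int = 512, hidden: int = 64, scale: float = 0.5):
        super().__init__()
        self.down = nn.Linear(dim, hidden)
        self.up = nn.Linear(hidden, dim)
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.scale * self.up(torch.relu(self.down(x)))


def freeze(module: nn.Module) -> None:
    """Lock a pre-trained backbone so only prompts/adapters receive gradients."""
    for p in module.parameters():
        p.requires_grad_(False)


# At training time only the prompt and adapter parameters would be optimized, e.g.:
#   backbone = load_pretrained_point_text_model()   # hypothetical loader
#   freeze(backbone)
#   optim = torch.optim.AdamW(
#       list(prompt_learner.parameters()) + list(adapter.parameters()), lr=1e-3)
```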
Related papers
- PECTP: Parameter-Efficient Cross-Task Prompts for Incremental Vision Transformer [76.39111896665585]
Incremental Learning (IL) aims to train deep models continually on a sequence of tasks.
Recent large pre-trained models (PTMs) have achieved outstanding performance in practical IL via prompt techniques, without access to old samples.
arXiv Detail & Related papers (2024-07-04T10:37:58Z)
- Attention Prompt Tuning: Parameter-efficient Adaptation of Pre-trained Models for Spatiotemporal Modeling [32.603558214472265]
We introduce Attention Prompt Tuning (APT) for video-based applications such as action recognition.
APT involves injecting a set of learnable prompts along with data tokens during fine-tuning while keeping the backbone frozen.
The proposed approach greatly reduces FLOPs and latency while achieving a significant performance boost.
arXiv Detail & Related papers (2024-03-11T17:59:41Z)
- Dynamic Adapter Meets Prompt Tuning: Parameter-Efficient Transfer Learning for Point Cloud Analysis [51.14136878142034]
Point cloud analysis has achieved outstanding performance by transferring point cloud pre-trained models.
Existing methods for model adaptation usually update all model parameters, which is inefficient because it incurs high computational and storage costs.
In this paper, we aim to study parameter-efficient transfer learning for point cloud analysis with an ideal trade-off between task performance and parameter efficiency.
arXiv Detail & Related papers (2024-03-03T08:25:04Z)
- Point-PEFT: Parameter-Efficient Fine-Tuning for 3D Pre-trained Models [46.42092771753465]
We introduce Point-PEFT, a novel framework for adapting point cloud pre-trained models with minimal learnable parameters.
Specifically, for a pre-trained 3D model, we freeze most of its parameters, and only tune the newly added PEFT modules on downstream tasks.
arXiv Detail & Related papers (2023-10-04T16:49:36Z)
- Instance-aware Dynamic Prompt Tuning for Pre-trained Point Cloud Models [64.49254199311137]
We propose a novel Instance-aware Dynamic Prompt Tuning (IDPT) strategy for pre-trained point cloud models.
The essence of IDPT is a dynamic prompt generation module that perceives semantic prior features of each point cloud instance; a generic sketch of this instance-dependent pattern appears after this list.
In experiments, IDPT outperforms full fine-tuning in most tasks with a mere 7% of the trainable parameters.
arXiv Detail & Related papers (2023-04-14T16:03:09Z)
- SPT: Semi-Parametric Prompt Tuning for Multitask Prompted Learning [28.29889045842277]
Multitask prompted learning can improve generalization by training on a diverse set of tasks at once.
We propose SPT, a semi-parametric prompt tuning method for multitask prompted learning.
arXiv Detail & Related papers (2022-12-21T11:18:09Z)
- Instance-wise Prompt Tuning for Pretrained Language Models [72.74916121511662]
Instance-wise Prompt Tuning (IPT) is the first prompt learning paradigm that injects knowledge from the input data instances to the prompts.
IPT significantly outperforms task-based prompt learning methods, and achieves comparable performance to conventional finetuning with only 0.5% - 1.5% of tuned parameters.
arXiv Detail & Related papers (2022-06-04T10:08:50Z)
- IDPG: An Instance-Dependent Prompt Generation Method [58.45110542003139]
Prompt tuning is a new, efficient NLP transfer learning paradigm that adds a task-specific prompt to each input instance during the model training stage.
We propose a conditional prompt generation method to generate prompts for each input instance.
arXiv Detail & Related papers (2022-04-09T15:45:27Z)
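Several of the entries above (IDPT, Instance-wise Prompt Tuning, IDPG) share the idea of generating prompts from each input instance rather than learning a single shared prompt; as referenced in the IDPT entry, a minimal PyTorch sketch of that shared pattern follows. The generator architecture, shapes, and names are illustrative assumptions rather than any particular paper's design.
```python
# Illustrative sketch of instance-dependent prompt generation: a small
# generator maps pooled per-instance features to prompt tokens that are
# prepended to a frozen encoder's token sequence. Shapes are assumptions.
import torch
import torch.nn as nn


class InstancePromptGenerator(nn.Module):
    def __init__(self, feat_dim: int = 384, n_prompts: int = 4):
        super().__init__()
        self.n_prompts = n_prompts
        self.net = nn.Sequential(
            nn.Linear(feat_dim, feat_dim),
            nn.GELU(),
            nn.Linear(feat_dim, n_prompts * feat_dim),
        )

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: [batch, n_tokens, feat_dim] from a frozen encoder block.
        pooled = tokens.mean(dim=1)            # per-instance summary feature
        prompts = self.net(pooled)             # [batch, n_prompts * feat_dim]
        prompts = prompts.view(-1, self.n_prompts, tokens.size(-1))
        # Prepend the dynamic prompts; the frozen transformer attends to them.
        return torch.cat([prompts, tokens], dim=1)
```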
This list is automatically generated from the titles and abstracts of the papers on this site.