Subgraph-level Universal Prompt Tuning
- URL: http://arxiv.org/abs/2402.10380v1
- Date: Fri, 16 Feb 2024 00:25:24 GMT
- Title: Subgraph-level Universal Prompt Tuning
- Authors: Junhyun Lee, Wooseong Yang, Jaewoo Kang
- Abstract summary: We introduce the Subgraph-level Universal Prompt Tuning (SUPT) approach, focusing on the detailed context within subgraphs.
It requires far fewer tuning parameters than fine-tuning-based methods, outperforming them in 42 out of 45 full-shot scenario experiments.
In few-shot scenarios, it excels in 41 out of 45 experiments, achieving an average performance increase of more than 6.6%.
- Score: 23.47792674117515
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the evolving landscape of machine learning, the adaptation of pre-trained
models through prompt tuning has become increasingly prominent. This trend is
particularly observable in the graph domain, where diverse pre-training
strategies present unique challenges in developing effective prompt-based
tuning methods for graph neural networks. Previous approaches have been
limited, focusing on specialized prompting functions tailored to models with
edge prediction pre-training tasks. These methods, however, suffer from a lack
of generalizability across different pre-training strategies. Recently, a
simple prompt tuning method has been designed for any pre-training strategy,
functioning within the input graph's feature space. This allows it to
theoretically emulate any type of prompting function, thereby significantly
increasing its versatility for a range of downstream applications.
Nevertheless, the capacity of such simple prompts to fully grasp the complex
contexts found in graphs remains an open question, necessitating further
investigation. Addressing this challenge, our work introduces the
Subgraph-level Universal Prompt Tuning (SUPT) approach, focusing on the
detailed context within subgraphs. In SUPT, prompt features are assigned at the
subgraph level, preserving the method's universal capability. This requires
far fewer tuning parameters than fine-tuning-based methods, outperforming
them in 42 out of 45 full-shot scenario experiments with an average improvement
of over 2.5%. In few-shot scenarios, it excels in 41 out of 45 experiments,
achieving an average performance increase of more than 6.6%.
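As a concrete illustration of the idea described above, here is a minimal sketch, assuming a PyTorch/PyG-style GNN with an `(x, edge_index)` signature and a precomputed node-to-subgraph assignment; the abstract does not specify the actual assignment scheme or prompt parameterization, so `num_subgraphs` and `subgraph_id` below are illustrative assumptions only.

```python
import torch
import torch.nn as nn


class SubgraphLevelPrompt(nn.Module):
    """Sketch of subgraph-level feature prompting over a frozen pre-trained GNN.

    Each node receives a learnable prompt vector selected by its subgraph
    assignment; the prompt is added in the input feature space, so the
    backbone stays unchanged regardless of its pre-training strategy.
    The subgraph granularity and assignment are assumptions for illustration.
    """

    def __init__(self, pretrained_gnn: nn.Module, feat_dim: int, num_subgraphs: int):
        super().__init__()
        self.gnn = pretrained_gnn
        for param in self.gnn.parameters():      # freeze the pre-trained backbone
            param.requires_grad = False
        # One prompt vector per subgraph: the only parameters tuned downstream.
        self.prompts = nn.Parameter(torch.zeros(num_subgraphs, feat_dim))

    def forward(self, x, edge_index, subgraph_id):
        # x: [num_nodes, feat_dim]; subgraph_id: [num_nodes] long tensor
        x = x + self.prompts[subgraph_id]        # per-subgraph prompt injection
        return self.gnn(x, edge_index)
```

Under this reading, only `self.prompts` (plus any task head) is handed to the optimizer, so the tuned parameter count is roughly `num_subgraphs * feat_dim` rather than the size of the backbone; a single shared prompt vector (the GPF setting discussed below) corresponds to the special case `num_subgraphs = 1`.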
Related papers
- HGMP: Heterogeneous Graph Multi-Task Prompt Learning [18.703129208282913]
We propose a novel multi-task prompt framework for the heterogeneous graph domain, named HGMP.
First, to bridge the gap between the pre-trained model and downstream tasks, we reformulate all downstream tasks into a unified graph-level task format.
We design a graph-level contrastive pre-training strategy to better leverage heterogeneous information and enhance performance in multi-task scenarios.
arXiv Detail & Related papers (2025-07-10T04:01:47Z)
- Instance-Aware Graph Prompt Learning [71.26108600288308]
We introduce Instance-Aware Graph Prompt Learning (IA-GPL) in this paper.
The process involves generating intermediate prompts for each instance using a lightweight architecture.
Experiments conducted on multiple datasets and settings showcase the superior performance of IA-GPL compared to state-of-the-art baselines.
arXiv Detail & Related papers (2024-11-26T18:38:38Z)
- Context-Aware Multimodal Pretraining [72.04020920042574]
We show that vision-language models can be trained to exhibit significantly increased few-shot adaptation.
We find up to four-fold improvements in test-time sample efficiency, and average few-shot adaptation gains of over 5%.
arXiv Detail & Related papers (2024-11-22T17:55:39Z)
- Denoising Pre-Training and Customized Prompt Learning for Efficient Multi-Behavior Sequential Recommendation [69.60321475454843]
We propose DPCPL, the first pre-training and prompt-tuning paradigm tailored for Multi-Behavior Sequential Recommendation.
In the pre-training stage, we propose a novel Efficient Behavior Miner (EBM) to filter out the noise at multiple time scales.
Subsequently, we propose to tune the pre-trained model in a highly efficient manner with the proposed Customized Prompt Learning (CPL) module.
arXiv Detail & Related papers (2024-08-21T06:48:38Z)
- RELIEF: Reinforcement Learning Empowered Graph Feature Prompt Tuning [15.385771185777626]
"Pre-train, prompt" paradigm has recently extended its generalization ability and data efficiency to graph representation learning.
We propose RELIEF, which employs Reinforcement Learning (RL) to optimize graph feature prompt tuning.
arXiv Detail & Related papers (2024-08-06T13:55:51Z)
- Gradient-Regulated Meta-Prompt Learning for Generalizable Vision-Language Models [137.74524357614285]
We introduce a novel Gradient-RegulAted Meta-prompt learning framework.
It helps pre-trained models adapt to downstream tasks in a parameter- and data-efficient way.
GRAM can be easily incorporated into various prompt tuning methods in a model-agnostic way.
arXiv Detail & Related papers (2023-03-12T05:03:37Z)
- SGL-PT: A Strong Graph Learner with Graph Prompt Tuning [36.650472660276]
We propose a novel framework named SGL-PT, which follows the learning strategy "Pre-train, Prompt, and Predict".
Specifically, we propose a strong and universal pre-training task, coined SGL, that acquires the complementary merits of generative and contrastive self-supervised graph learning.
Aiming at the graph classification task, we unify pre-training and fine-tuning by designing a novel verbalizer-free prompting function, which reformulates the downstream task in a format similar to the pretext task.
arXiv Detail & Related papers (2023-02-24T04:31:18Z)
- Universal Prompt Tuning for Graph Neural Networks [10.250964386142819]
We introduce a universal prompt-based tuning method called Graph Prompt Feature (GPF) for pre-trained GNN models under any pre-training strategy.
GPF operates on the input graph's feature space and can theoretically achieve an equivalent effect to any form of prompting function.
Our method significantly outperforms existing specialized prompt-based tuning methods when applied to models utilizing the pre-training strategy they specialize in.
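For reference, a minimal sketch of this feature-space prompting follows, under the same assumed PyG-style `(x, edge_index)` interface as the sketch above; GPF's actual implementation details may differ.

```python
import torch
import torch.nn as nn


class GraphPromptFeature(nn.Module):
    """GPF-style sketch: a single learnable vector is added to every node's
    input features and tuned while the pre-trained GNN stays frozen."""

    def __init__(self, pretrained_gnn: nn.Module, feat_dim: int):
        super().__init__()
        self.gnn = pretrained_gnn
        for param in self.gnn.parameters():              # freeze the backbone
            param.requires_grad = False
        self.p = nn.Parameter(torch.zeros(1, feat_dim))  # one shared prompt vector

    def forward(self, x, edge_index):
        return self.gnn(x + self.p, edge_index)          # prompt in input feature space
```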
arXiv Detail & Related papers (2022-09-30T05:19:27Z)
- Generalizing Interactive Backpropagating Refinement for Dense Prediction [0.0]
We introduce a set of G-BRS layers that enable both global and localized refinement for a range of dense prediction tasks.
Our method can successfully generalize and significantly improve performance of existing pretrained state-of-the-art models with only a few clicks.
arXiv Detail & Related papers (2021-12-21T03:52:08Z)
- Deep Gaussian Processes for Few-Shot Segmentation [66.08463078545306]
Few-shot segmentation is a challenging task, requiring the extraction of a generalizable representation from only a few annotated samples.
We propose a few-shot learner formulation based on Gaussian process (GP) regression.
Our approach sets a new state-of-the-art for 5-shot segmentation, with mIoU scores of 68.1 and 49.8 on PASCAL-5i and COCO-20i, respectively.
arXiv Detail & Related papers (2021-03-30T17:56:32Z)
- Regularizing Meta-Learning via Gradient Dropout [102.29924160341572]
Meta-learning models are prone to overfitting when there are insufficient training tasks for the meta-learners to generalize.
We introduce a simple yet effective method to alleviate the risk of overfitting for gradient-based meta-learning.
arXiv Detail & Related papers (2020-04-13T10:47:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.