SGL-PT: A Strong Graph Learner with Graph Prompt Tuning
- URL: http://arxiv.org/abs/2302.12449v2
- Date: Tue, 15 Aug 2023 08:11:16 GMT
- Title: SGL-PT: A Strong Graph Learner with Graph Prompt Tuning
- Authors: Yun Zhu and Jianhao Guo and Siliang Tang
- Abstract summary: We propose a novel framework named SGL-PT which follows the learning strategy "Pre-train, Prompt, and Predict".
Specifically, we introduce a strong and universal pre-training task, coined SGL, that combines the complementary merits of generative and contrastive self-supervised graph learning.
Aiming at the graph classification task, we unify pre-training and fine-tuning by designing a novel verbalizer-free prompting function, which reformulates the downstream task in a format similar to the pretext task.
- Score: 36.650472660276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, much effort has been devoted to designing graph self-supervised methods that yield generalized pre-trained models, which are then adapted to downstream tasks through fine-tuning. However, there exists an inherent gap between pretext and downstream graph tasks, which prevents pre-trained models from being fully exploited and can even lead to negative transfer. Meanwhile, prompt tuning has seen emerging success in natural language processing by aligning pre-training and fine-tuning with consistent training objectives. In this paper, we identify two challenges for graph prompt tuning: the first is the lack of a strong and universal pre-training task across the sundry pre-training methods in the graph domain; the second lies in the difficulty of designing a training objective that is consistent between pre-training and downstream tasks. To overcome these obstacles, we propose a novel framework named SGL-PT which follows the learning strategy "Pre-train, Prompt, and Predict". Specifically, we introduce a strong and universal pre-training task, coined SGL, that combines the complementary merits of generative and contrastive self-supervised graph learning. Aiming at the graph classification task, we unify pre-training and fine-tuning by designing a novel verbalizer-free prompting function, which reformulates the downstream task in a format similar to the pretext task. Empirical results show that our method surpasses other baselines under the unsupervised setting, and that our prompt tuning method greatly improves performance on biological datasets over fine-tuning methods.
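The combination of generative and contrastive objectives that the abstract attributes to SGL can be pictured with a short PyTorch sketch. This is a minimal illustration under our own assumptions (a toy dense-adjacency GCN, masked feature reconstruction as the generative branch, InfoNCE over two feature-dropout views as the contrastive branch), not the paper's actual formulation:

```python
# Illustrative sketch only: SGL's exact objective is not reproduced here.
# All module names, the masking scheme, and the loss weighting are assumptions.
import torch
import torch.nn.functional as F

class DenseGCN(torch.nn.Module):
    """Minimal GCN over a dense adjacency matrix (toy encoder)."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hid_dim)
        self.lin2 = torch.nn.Linear(hid_dim, hid_dim)

    def forward(self, adj, x):
        h = F.relu(adj @ self.lin1(x))
        return adj @ self.lin2(h)

def info_nce(z1, z2, tau=0.5):
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau              # pairwise view similarities
    targets = torch.arange(z1.size(0))      # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

def pretrain_step(encoder, decoder, adj, x, mask_rate=0.3):
    # Generative branch: mask node features and reconstruct them.
    mask = torch.rand(x.size(0)) < mask_rate
    mask[0] = True                          # ensure at least one masked node
    x_masked = x.clone()
    x_masked[mask] = 0.0
    h = encoder(adj, x_masked)
    gen_loss = F.mse_loss(decoder(h[mask]), x[mask])

    # Contrastive branch: two feature-dropout views of the same graph.
    z1 = encoder(adj, F.dropout(x, 0.2))
    z2 = encoder(adj, F.dropout(x, 0.2))
    con_loss = info_nce(z1, z2)

    return gen_loss + con_loss              # weighting is a tunable choice

# Toy usage: 10 nodes, 16-dim features, self-loop-only adjacency.
enc, dec = DenseGCN(16, 32), torch.nn.Linear(32, 16)
adj, x = torch.eye(10), torch.randn(10, 16)
loss = pretrain_step(enc, dec, adj, x)
loss.backward()
```

The point of the hybrid is that the reconstruction term preserves feature-level information while the contrastive term shapes the embedding geometry; how SGL actually balances the two is described in the paper itself.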
Related papers
- Towards Graph Foundation Models: Learning Generalities Across Graphs via Task-Trees [50.78679002846741]
We introduce a novel approach for learning cross-task generalities in graphs.
We propose task-trees as basic learning instances to align task spaces on graphs.
Our findings indicate that when a graph neural network is pretrained on diverse task-trees, it acquires transferable knowledge.
arXiv Detail & Related papers (2024-12-21T02:07:43Z)
- Can Graph Neural Networks Learn Language with Extremely Weak Text Supervision? [62.12375949429938]
Building transferable Graph Neural Networks (GNNs) with a CLIP-style pipeline is challenging because of three fundamental issues.
We leverage multi-modal prompt learning to effectively adapt a pre-trained GNN to downstream tasks and data.
Our new paradigm embeds graphs directly in the same space as Large Language Models (LLMs) by learning graph prompts and text prompts simultaneously.
arXiv Detail & Related papers (2024-12-11T08:03:35Z)
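As a rough picture of the summary above, the following sketch aligns graph and text embeddings in a shared space with a symmetric CLIP-style loss and learnable prompt vectors on both sides. Every module, dimension, and the way prompts are injected are assumptions, not the paper's architecture:

```python
# Hedged sketch of CLIP-style graph-text alignment with learnable prompts.
import torch
import torch.nn.functional as F

class PromptedAlignment(torch.nn.Module):
    def __init__(self, graph_dim=64, text_dim=128, shared_dim=32, n_prompts=4):
        super().__init__()
        # Learnable prompt vectors for each modality (injection is simplified).
        self.graph_prompts = torch.nn.Parameter(torch.randn(n_prompts, graph_dim))
        self.text_prompts = torch.nn.Parameter(torch.randn(n_prompts, text_dim))
        self.graph_proj = torch.nn.Linear(graph_dim, shared_dim)
        self.text_proj = torch.nn.Linear(text_dim, shared_dim)

    def forward(self, g_emb, t_emb):
        g = g_emb + self.graph_prompts.mean(0)   # inject graph-side prompts
        t = t_emb + self.text_prompts.mean(0)    # inject text-side prompts
        return self.graph_proj(g), self.text_proj(t)

def clip_loss(g, t, tau=0.07):
    g, t = F.normalize(g, dim=-1), F.normalize(t, dim=-1)
    logits = g @ t.t() / tau
    labels = torch.arange(g.size(0))
    # Symmetric InfoNCE: graphs match their paired texts and vice versa.
    return (F.cross_entropy(logits, labels)
            + F.cross_entropy(logits.t(), labels)) / 2

model = PromptedAlignment()
g_emb = torch.randn(8, 64)     # frozen GNN outputs (placeholder)
t_emb = torch.randn(8, 128)    # frozen LLM outputs (placeholder)
loss = clip_loss(*model(g_emb, t_emb))
loss.backward()
```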
- Instance-Aware Graph Prompt Learning [71.26108600288308]
We introduce Instance-Aware Graph Prompt Learning (IA-GPL) in this paper.
The process involves generating intermediate prompts for each instance using a lightweight architecture.
Experiments conducted on multiple datasets and settings showcase the superior performance of IA-GPL compared to state-of-the-art baselines.
arXiv Detail & Related papers (2024-11-26T18:38:38Z)
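The instance-aware idea summarized above can be sketched as a lightweight network that emits a distinct prompt per instance instead of one shared prompt. The bottleneck MLP and the additive conditioning below are assumptions, not IA-GPL's published design:

```python
# Sketch: per-instance prompt generation with a lightweight bottleneck MLP.
import torch

class InstancePromptGenerator(torch.nn.Module):
    def __init__(self, emb_dim=64, prompt_dim=64, hidden=16):
        super().__init__()
        # Small bottleneck keeps the prompt generator lightweight.
        self.net = torch.nn.Sequential(
            torch.nn.Linear(emb_dim, hidden),
            torch.nn.ReLU(),
            torch.nn.Linear(hidden, prompt_dim),
        )

    def forward(self, inst_emb):
        prompt = self.net(inst_emb)    # one prompt per instance
        return inst_emb + prompt       # condition the instance on its prompt

gen = InstancePromptGenerator()
graph_embs = torch.randn(32, 64)       # e.g., pooled per-graph embeddings
prompted = gen(graph_embs)             # shape (32, 64), instance-specific
```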
- HetGPT: Harnessing the Power of Prompt Tuning in Pre-Trained Heterogeneous Graph Neural Networks [22.775933880072294]
HetGPT is a post-training prompting framework for heterogeneous graph neural networks.
It improves the performance of state-of-the-art HGNNs on semi-supervised node classification.
arXiv Detail & Related papers (2023-10-23T19:35:57Z)
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) of a pre-trained LM on the downstream task.
We then generate node embeddings from the last hidden states of the fine-tuned LM.
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
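The two-step recipe summarized above lends itself to a short sketch: PEFT of a pre-trained LM (LoRA shown here as one common PEFT choice), then node embeddings from the fine-tuned LM's last hidden states. The backbone name, LoRA target modules, and mean pooling are assumptions:

```python
# Sketch of the SimTeG-style recipe: PEFT an LM, then reuse its hidden states.
import torch
from transformers import AutoModel, AutoTokenizer
from peft import LoraConfig, get_peft_model

name = "sentence-transformers/all-MiniLM-L6-v2"   # placeholder backbone
tokenizer = AutoTokenizer.from_pretrained(name)
model = get_peft_model(
    AutoModel.from_pretrained(name),
    LoraConfig(r=8, lora_alpha=16, target_modules=["query", "value"]),
)

# ... step 1 (omitted): train `model` with a classification head on the
# downstream node labels, updating only the LoRA parameters.

@torch.no_grad()
def node_embeddings(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = model(**batch).last_hidden_state        # (N, T, D)
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)      # mean-pool over tokens

embs = node_embeddings(["node text A", "node text B"])  # feed these to a GNN
```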
- GraphPrompt: Unifying Pre-Training and Downstream Tasks for Graph Neural Networks [16.455234748896157]
GraphPrompt is a novel pre-training and prompting framework on graphs.
It unifies pre-training and downstream tasks into a common task template.
It also employs a learnable prompt to assist a downstream task in locating the most relevant knowledge from the pre-trained model.
arXiv Detail & Related papers (2023-02-16T02:51:38Z)
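In the spirit of this summary, a learnable prompt can be sketched as a vector that reweights frozen node embeddings during readout, steering which pre-trained features the downstream task attends to. The shapes and mean pooling below are assumptions rather than GraphPrompt's exact operator:

```python
# Sketch: a task-specific prompt vector applied during graph readout.
import torch

class PromptedReadout(torch.nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        # Only this vector is trained downstream; the encoder stays frozen.
        self.prompt = torch.nn.Parameter(torch.ones(dim))

    def forward(self, node_embs):
        return (self.prompt * node_embs).mean(0)   # prompt-weighted pooling

readout = PromptedReadout()
node_embs = torch.randn(20, 64)   # frozen pre-trained node embeddings
graph_emb = readout(node_embs)    # (64,), e.g., matched to class prototypes
```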
- Self-supervised Graph Masking Pre-training for Graph-to-Text Generation [5.108327983929205]
Large-scale pre-trained language models (PLMs) have advanced Graph-to-Text (G2T) generation.
We propose graph masking pre-training strategies that neither require supervision signals nor adjust the architecture of the underlying pre-trained encoder-decoder model.
Our approach achieves new state-of-the-art results on WebNLG+ 2020 and EventNarrative G2T generation datasets.
arXiv Detail & Related papers (2022-10-19T14:44:56Z)
- Self-Distillation for Further Pre-training of Transformers [83.84227016847096]
We propose self-distillation as a regularization for a further pre-training stage.
We empirically validate the efficacy of self-distillation on a variety of benchmark datasets for image and text classification tasks.
arXiv Detail & Related papers (2022-09-30T02:25:12Z)
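Self-distillation as a regularizer for a further pre-training stage can be sketched as a KL term between the current model and a frozen snapshot of itself; the stand-in model, temperature, and loss form below are illustrative assumptions:

```python
# Sketch: self-distillation regularizer against a frozen snapshot ("teacher").
import copy
import torch
import torch.nn.functional as F

def self_distill_loss(student_logits, teacher_logits, tau=2.0):
    p_t = F.softmax(teacher_logits / tau, dim=-1)
    log_p_s = F.log_softmax(student_logits / tau, dim=-1)
    # Standard temperature-scaled distillation KL term.
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * tau**2

student = torch.nn.Linear(16, 4)            # stand-in for the full model
teacher = copy.deepcopy(student).eval()     # snapshot from the prior stage
for p in teacher.parameters():
    p.requires_grad_(False)

x = torch.randn(8, 16)
loss = self_distill_loss(student(x), teacher(x))  # add to the pretext loss
loss.backward()
```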
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity in modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)