All in One: Multi-task Prompting for Graph Neural Networks
- URL: http://arxiv.org/abs/2307.01504v2
- Date: Sun, 17 Dec 2023 08:36:44 GMT
- Title: All in One: Multi-task Prompting for Graph Neural Networks
- Authors: Xiangguo Sun, Hong Cheng, Jia Li, Bo Liu, Jihong Guan
- Abstract summary: We propose a novel multi-task prompting method for graph models.
We first unify the format of graph prompts and language prompts with the prompt token, token structure, and inserting pattern.
We then study the task space of various graph applications and reformulate downstream problems to the graph-level task.
- Score: 30.457491401821652
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, "pre-training and fine-tuning" has been adopted as a standard
workflow for many graph tasks, since it can transfer general graph knowledge to
relieve the lack of annotations in each application. However, node-level,
edge-level, and graph-level tasks are highly diverse, which often makes the
pre-training pretext incompatible with these multiple tasks. This gap may even
cause "negative transfer" to the target application, leading to poor results.
Inspired by prompt learning in natural language processing (NLP), which has
shown significant effectiveness in leveraging prior knowledge for various NLP
tasks, we study prompting for graphs with the motivation of bridging the gap
between pre-trained models and various graph tasks. In this paper, we propose a
novel multi-task prompting method for graph models. Specifically, we first
unify the format of graph prompts and language prompts in terms of the prompt
token, token structure, and inserting pattern, so that the prompting idea from
NLP can be seamlessly introduced to the graph area. Then, to further narrow the
gap between various graph tasks and state-of-the-art pre-training strategies,
we study the task space of various graph applications and reformulate
downstream problems as graph-level tasks. Afterward, we introduce meta-learning
to efficiently learn a better initialization for the multi-task graph prompt,
so that our prompting framework is more reliable and general across tasks.
Extensive experiments demonstrate the superiority of our method.
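The unified prompt format above can be illustrated with a minimal sketch: prompt tokens as learnable feature vectors, the token structure as soft links among those tokens, and the inserting pattern as similarity-weighted links from tokens to the original nodes. The class name, the sigmoid-based link weights, and the hyperparameters below are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class PromptGraph(nn.Module):
    """Hypothetical sketch of a learnable graph prompt.

    tokens            : prompt tokens (learnable feature vectors)
    token structure   : soft adjacency among the prompt tokens
    inserting pattern : similarity-weighted links from tokens to input nodes
    """

    def __init__(self, num_tokens: int, feat_dim: int):
        super().__init__()
        self.tokens = nn.Parameter(0.1 * torch.randn(num_tokens, feat_dim))

    def insert(self, x: torch.Tensor, adj: torch.Tensor):
        """Attach the prompt tokens to a graph with node features x (n, d)
        and adjacency adj (n, n); returns the augmented (prompted) graph."""
        inner = torch.sigmoid(self.tokens @ self.tokens.t())    # token structure (k, k)
        cross = torch.sigmoid(self.tokens @ x.t())               # inserting pattern (k, n)
        x_aug = torch.cat([x, self.tokens], dim=0)               # (n + k, d)
        adj_aug = torch.cat(
            [torch.cat([adj, cross.t()], dim=1),                  # (n, n + k)
             torch.cat([cross, inner], dim=1)], dim=0)            # (k, n + k)
        return x_aug, adj_aug

# Usage: the prompted graph is fed to a frozen pre-trained GNN; only the
# prompt parameters are optimized for the downstream (graph-level) task.
prompt = PromptGraph(num_tokens=4, feat_dim=16)
x, adj = torch.randn(10, 16), (torch.rand(10, 10) > 0.7).float()
x_aug, adj_aug = prompt.insert(x, adj)
```

Under this view, node- and edge-level problems can be cast as graph-level tasks on induced subgraphs, and the meta-learning step mentioned above would wrap the prompt optimization in an outer loop that learns a shared initialization of the prompt parameters across tasks.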
Related papers
- Instance-Aware Graph Prompt Learning [71.26108600288308]
We introduce Instance-Aware Graph Prompt Learning (IA-GPL) in this paper.
The process involves generating intermediate prompts for each instance using a lightweight architecture.
Experiments conducted on multiple datasets and settings showcase the superior performance of IA-GPL compared to state-of-the-art baselines.
arXiv Detail & Related papers (2024-11-26T18:38:38Z)
- ProG: A Graph Prompt Learning Benchmark [17.229372585695092]
Graph prompt learning emerges as a promising alternative to 'Pre-train & Fine-tune'.
We introduce the first comprehensive benchmark for graph prompt learning.
We present 'ProG', an easy-to-use open-source library that streamlines the execution of various graph prompt models.
arXiv Detail & Related papers (2024-06-08T04:17:48Z)
- All in One: Multi-Task Prompting for Graph Neural Networks (Extended Abstract) [30.457491401821652]
This paper is an extended abstract of our original work published at KDD 2023, where it won the best research paper award.
It introduces a novel approach to bridging the gap between pre-trained graph models and the diverse tasks they are applied to.
arXiv Detail & Related papers (2024-03-11T16:04:58Z)
- Generalized Graph Prompt: Toward a Unification of Pre-Training and Downstream Tasks on Graphs [20.406549548630156]
GraphPrompt is a novel pre-training and prompting framework on graphs.
It unifies pre-training and downstream tasks into a common task template.
It also employs a learnable prompt to assist a downstream task in locating the most relevant knowledge from the pre-trained model.
arXiv Detail & Related papers (2023-11-26T14:35:28Z)
- ULTRA-DP: Unifying Graph Pre-training with Multi-task Graph Dual Prompt [67.8934749027315]
We propose a unified framework for graph hybrid pre-training which injects task identification and position identification into GNNs.
We also propose a novel pre-training paradigm based on a group of $k$-nearest neighbors.
arXiv Detail & Related papers (2023-10-23T12:11:13Z)
- One for All: Towards Training One Graph Model for All Classification Tasks [61.656962278497225]
A unified model for various graph tasks remains underexplored, primarily due to the challenges unique to the graph learning domain.
We propose One for All (OFA), the first general framework that can use a single graph model to address the above challenges.
OFA performs well across different tasks, making it the first general-purpose, cross-domain classification model on graphs.
arXiv Detail & Related papers (2023-09-29T21:15:26Z)
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) of a pre-trained LM on the downstream task.
We then generate node embeddings from the last hidden states of the fine-tuned LM (see the sketch after this list).
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- GraphPrompt: Unifying Pre-Training and Downstream Tasks for Graph Neural Networks [16.455234748896157]
GraphPrompt is a novel pre-training and prompting framework on graphs.
It unifies pre-training and downstream tasks into a common task template.
It also employs a learnable prompt to assist a downstream task in locating the most relevant knowledge from the pre-trained model.
arXiv Detail & Related papers (2023-02-16T02:51:38Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity in modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
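The SimTeG recipe referenced above (fine-tune a language model on node text, then reuse its last hidden states as node features for a GNN) can be sketched as follows; the model name, the mean pooling, and the omission of the PEFT fine-tuning step are simplifying assumptions, not the paper's exact setup.

```python
import torch
from transformers import AutoModel, AutoTokenizer

def node_embeddings(texts, model_name="sentence-transformers/all-MiniLM-L6-v2"):
    """Encode each node's text with an LM (after any fine-tuning) and
    mean-pool the last hidden states into one vector per node."""
    tok = AutoTokenizer.from_pretrained(model_name)
    lm = AutoModel.from_pretrained(model_name).eval()
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = lm(**batch).last_hidden_state              # (num_nodes, seq_len, hidden)
    mask = batch["attention_mask"].unsqueeze(-1).float() # mean-pool over real tokens only
    return (out * mask).sum(dim=1) / mask.sum(dim=1)     # (num_nodes, hidden)

# The resulting matrix serves as the node feature input X of a downstream GNN.
X = node_embeddings(["node text for paper A", "node text for paper B"])
```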