All in One: Multi-Task Prompting for Graph Neural Networks (Extended
Abstract)
- URL: http://arxiv.org/abs/2403.07040v1
- Date: Mon, 11 Mar 2024 16:04:58 GMT
- Title: All in One: Multi-Task Prompting for Graph Neural Networks (Extended
Abstract)
- Authors: Xiangguo Sun, Hong Cheng, Jia Li, Bo Liu, Jihong Guan
- Abstract summary: This paper is an extended abstract of our original work published in KDD23, where we won the best research paper award.
It introduces a novel approach to bridging the gap between pre-trained graph models and the diverse tasks they're applied to.
- Score: 30.457491401821652
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper is an extended abstract of our original work published in KDD23,
where we won the best research paper award (Xiangguo Sun, Hong Cheng, Jia Li,
Bo Liu, and Jihong Guan. All in one: Multi-task prompting for graph neural
networks. KDD 23). The paper introduces a novel approach to bridging the gap
between pre-trained graph models and the diverse tasks they're applied to,
inspired by the success of prompt learning in NLP. Recognizing the challenge of
aligning pre-trained models with varied graph tasks (node level, edge level,
and graph level), which can lead to negative transfer and poor performance, we
propose a multi-task prompting method for graphs. This method involves unifying
graph and language prompt formats, enabling NLP's prompting strategies to be
adapted for graph tasks. By analyzing the task space of graph applications, we
reformulate problems to fit graph-level tasks and apply meta-learning to
improve prompt initialization for multiple tasks. Experiments show our method's
effectiveness in enhancing model performance across different graph tasks.
Beyond the original work, in this extended abstract, we further discuss the
graph prompt from a bigger picture and provide some of the latest work toward
this area.
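The unified prompt format described above (prompt tokens, token structure, and an inserting pattern) can be sketched in a few lines. The following is a minimal, illustrative numpy sketch, not the paper's implementation: it assumes one simple inserting pattern in which each node receives a softmax-weighted combination of learnable prompt-token vectors added to its features; the function and variable names are invented here.

```python
import numpy as np

def insert_graph_prompt(node_feats, prompt_tokens):
    """Add a graph prompt to node features (toy inserting pattern).

    node_feats:    (n, d) matrix of node features.
    prompt_tokens: (k, d) matrix of learnable prompt-token vectors.
    Each node gets a weighted mix of prompt tokens, with weights from
    a softmax over dot-product similarity to the tokens.
    """
    scores = node_feats @ prompt_tokens.T             # (n, k) similarities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)     # row-wise softmax
    return node_feats + weights @ prompt_tokens       # prompted features

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))      # 5 nodes, 8-dim features
p = rng.normal(size=(3, 8))      # 3 prompt tokens
x_prompted = insert_graph_prompt(x, p)
print(x_prompted.shape)          # same shape as the input features
```

In the paper's framing, `prompt_tokens` would be trained (e.g. with a meta-learned initialization) while the pre-trained graph model stays fixed; the prompted features then flow into the frozen model as if they were ordinary node features.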
Related papers
- Can Graph Learning Improve Task Planning? [61.47027387839096]
Task planning is emerging as an important research topic alongside the development of large language models (LLMs).
In this paper, we explore graph learning-based methods for task planning.
Our approach complements prompt engineering and fine-tuning techniques, with performance further enhanced by improved prompts or a fine-tuned model.
arXiv Detail & Related papers (2024-05-29T14:26:24Z)
- Generalized Graph Prompt: Toward a Unification of Pre-Training and Downstream Tasks on Graphs [20.406549548630156]
GraphPrompt is a novel pre-training and prompting framework on graphs.
It unifies pre-training and downstream tasks into a common task template.
It also employs a learnable prompt to assist a downstream task in locating the most relevant knowledge from the pre-trained model.
arXiv Detail & Related papers (2023-11-26T14:35:28Z)
- One for All: Towards Training One Graph Model for All Classification Tasks [61.656962278497225]
A unified model for various graph tasks remains underexplored, primarily due to the challenges unique to the graph learning domain.
We propose One for All (OFA), the first general framework that can use a single graph model to address the above challenges.
OFA performs well across different tasks, making it the first general-purpose cross-domain classification model on graphs.
arXiv Detail & Related papers (2023-09-29T21:15:26Z)
- Deep Prompt Tuning for Graph Transformers [55.2480439325792]
Fine-tuning is resource-intensive and requires storing multiple copies of large models.
We propose a novel approach called deep graph prompt tuning as an alternative to fine-tuning.
By freezing the pre-trained parameters and only updating the added tokens, our approach reduces the number of free parameters and eliminates the need for multiple model copies.
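The freeze-and-tune idea can be sketched in plain numpy. This is an illustrative toy, not the paper's architecture: `W` stands in for frozen pre-trained weights and `p` for the added prompt vector (both names invented here); gradient descent updates only `p`, so the number of free parameters is `d` instead of `d*d`.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
W = rng.normal(size=(d, d))   # frozen pre-trained weights (never updated)
p = np.zeros(d)               # trainable prompt vector (the only free params)
x = rng.normal(size=d)        # one input feature vector
y = rng.normal(size=d)        # toy regression target

def forward(x, p):
    return W @ (x + p)        # prompt added to the input; W stays frozen

init_loss = float(np.sum((forward(x, p) - y) ** 2))
lr = 0.01
for _ in range(200):
    err = forward(x, p) - y
    p -= lr * (W.T @ err)     # gradient step w.r.t. the prompt only

final_loss = float(np.sum((forward(x, p) - y) ** 2))
print(final_loss < init_loss)
```

Because only `p` is stored per task, many downstream tasks can share one frozen copy of the pre-trained model, which is the storage saving the abstract refers to.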
arXiv Detail & Related papers (2023-09-18T20:12:17Z)
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) on a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the fine-tuned LM.
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- All in One: Multi-task Prompting for Graph Neural Networks [30.457491401821652]
We propose a novel multi-task prompting method for graph models.
We first unify the format of graph prompts and language prompts with the prompt token, token structure, and inserting pattern.
We then study the task space of various graph applications and reformulate downstream problems to the graph-level task.
arXiv Detail & Related papers (2023-07-04T06:27:31Z)
- GraphPrompt: Unifying Pre-Training and Downstream Tasks for Graph Neural Networks [16.455234748896157]
GraphPrompt is a novel pre-training and prompting framework on graphs.
It unifies pre-training and downstream tasks into a common task template.
It also employs a learnable prompt to assist a downstream task in locating the most relevant knowledge from the pre-trained model.
arXiv Detail & Related papers (2023-02-16T02:51:38Z)
- DOTIN: Dropping Task-Irrelevant Nodes for GNNs [119.17997089267124]
Recent graph learning approaches have introduced the pooling strategy to reduce the size of graphs for learning.
We design a new approach called DOTIN (Dropping Task-Irrelevant Nodes) to reduce the size of graphs.
Our method speeds up GAT by about 50% on graph-level tasks including graph classification and graph edit distance.
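The node-dropping idea can be illustrated with a small sketch. This is a hedged toy, not DOTIN's method: DOTIN learns which nodes are task-irrelevant end-to-end, whereas here the per-node scores are a random stand-in, and the `drop_nodes` helper is invented for illustration. The point is only the mechanics: keep the top-k scored nodes and slice the adjacency matrix and features accordingly, shrinking the graph.

```python
import numpy as np

def drop_nodes(adj, feats, scores, keep):
    """Keep the top-`keep` scored nodes; slice adjacency and features."""
    idx = np.argsort(scores)[-keep:]   # indices of the top-k nodes
    idx.sort()                         # preserve the original node order
    return adj[np.ix_(idx, idx)], feats[idx]

rng = np.random.default_rng(0)
n, d = 6, 4
adj = (rng.random((n, n)) > 0.5).astype(float)  # toy adjacency matrix
feats = rng.normal(size=(n, d))                 # toy node features
scores = rng.random(n)                          # stand-in for learned scores
adj_small, feats_small = drop_nodes(adj, feats, scores, keep=4)
print(adj_small.shape, feats_small.shape)
```

Shrinking the graph before the GNN layers is where the reported speed-up on graph-level tasks comes from: fewer nodes mean fewer messages to aggregate per layer.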
arXiv Detail & Related papers (2022-04-28T12:00:39Z)
- Graph Pooling for Graph Neural Networks: Progress, Challenges, and Opportunities [128.55790219377315]
Graph neural networks have emerged as a leading architecture for many graph-level tasks.
Graph pooling is indispensable for obtaining a holistic representation of the whole graph.
arXiv Detail & Related papers (2022-04-15T04:02:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.