All in One: Multi-Task Prompting for Graph Neural Networks (Extended Abstract)
- URL: http://arxiv.org/abs/2403.07040v1
- Date: Mon, 11 Mar 2024 16:04:58 GMT
- Title: All in One: Multi-Task Prompting for Graph Neural Networks (Extended Abstract)
- Authors: Xiangguo Sun, Hong Cheng, Jia Li, Bo Liu, Jihong Guan
- Abstract summary: This paper is an extended abstract of our original work published in KDD '23, where we won the best research paper award.
It introduces a novel approach to bridging the gap between pre-trained graph models and the diverse tasks they're applied to.
- Score: 30.457491401821652
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper is an extended abstract of our original work published in KDD '23,
where we won the best research paper award (Xiangguo Sun, Hong Cheng, Jia Li,
Bo Liu, and Jihong Guan. All in one: Multi-task prompting for graph neural
networks. KDD '23). The paper introduces a novel approach to bridging the gap
between pre-trained graph models and the diverse tasks they're applied to,
inspired by the success of prompt learning in NLP. Recognizing the challenge of
aligning pre-trained models with varied graph tasks (node level, edge level,
and graph level), which can lead to negative transfer and poor performance, we
propose a multi-task prompting method for graphs. This method involves unifying
graph and language prompt formats, enabling NLP's prompting strategies to be
adapted for graph tasks. By analyzing the task space of graph applications, we
reformulate problems to fit graph-level tasks and apply meta-learning to
improve prompt initialization for multiple tasks. Experiments show our method's
effectiveness in enhancing model performance across different graph tasks.
Beyond the original work, in this extended abstract we further discuss graph
prompting from a broader perspective and survey some of the latest work in
this area.
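To make the prompting idea concrete, below is a minimal PyTorch sketch of a learnable prompt attached to a graph's node features. The class name PromptGraph, the dot-product weighting used as the inserting pattern, and all dimensions are illustrative assumptions, not the published implementation; the full method additionally learns a token structure and meta-learns the prompt initialization, which this toy omits.

```python
# A minimal sketch of the prompt-graph idea, assuming a simple additive
# inserting pattern. Names and dimensions are illustrative, not the authors'.
import torch
import torch.nn as nn

class PromptGraph(nn.Module):
    """Learnable prompt tokens inserted into an input graph's node features."""

    def __init__(self, num_prompt_tokens: int, feat_dim: int):
        super().__init__()
        # Prompt tokens: a small set of learnable feature vectors.
        self.tokens = nn.Parameter(torch.randn(num_prompt_tokens, feat_dim) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, feat_dim) node features of the input graph.
        # Inserting pattern (one simple choice): weight each prompt token by its
        # similarity to each node, then add the weighted sum to the features.
        weights = torch.softmax(x @ self.tokens.T, dim=-1)  # (num_nodes, num_tokens)
        return x + weights @ self.tokens                    # prompted node features

# Toy usage: prompt a 5-node graph with 16-dim features; the prompted features
# would then be fed to a frozen, pre-trained GNN (not shown).
prompt = PromptGraph(num_prompt_tokens=4, feat_dim=16)
x_prompted = prompt(torch.randn(5, 16))
print(x_prompted.shape)  # torch.Size([5, 16])
```

In the full method, a meta-learning loop (MAML-style) would initialize the prompt tokens so that a few gradient steps adapt them to each node-level, edge-level, or graph-level task reformulated as a graph-level one.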
Related papers
- Instance-Aware Graph Prompt Learning [71.26108600288308]
We introduce Instance-Aware Graph Prompt Learning (IA-GPL) in this paper.
The process involves generating intermediate prompts for each instance using a lightweight architecture.
Experiments conducted on multiple datasets and settings showcase the superior performance of IA-GPL compared to state-of-the-art baselines; a hedged sketch of the instance-aware idea follows the citation below.
arXiv Detail & Related papers (2024-11-26T18:38:38Z)
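As a rough illustration of instance-aware prompting, the sketch below conditions each instance's prompt on its own features via a lightweight MLP. The module name, architecture, and additive insertion are assumptions for illustration only, not IA-GPL's published design.

```python
# A hedged sketch: a lightweight generator maps each instance's features to
# its own prompt vector. All design choices here are illustrative assumptions.
import torch
import torch.nn as nn

class InstancePromptGenerator(nn.Module):
    def __init__(self, feat_dim: int, hidden_dim: int = 32):
        super().__init__()
        # Lightweight generator: far fewer parameters than the backbone GNN.
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, feat_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # One prompt per instance, added to that instance's own features.
        return x + self.net(x)

gen = InstancePromptGenerator(feat_dim=16)
print(gen(torch.randn(5, 16)).shape)  # 5 instances -> torch.Size([5, 16])
```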
- Replay-and-Forget-Free Graph Class-Incremental Learning: A Task Profiling and Prompting Approach [28.194940062243003]
Class-incremental learning (CIL) aims to continually learn a sequence of tasks, with each task consisting of a set of unique classes.
The key characteristic of CIL lies in the absence of task identifiers (IDs) during inference.
We show theoretically that accurate task ID prediction on graph data can be achieved by a Laplacian smoothing-based graph task profiling approach; a minimal sketch of the smoothing step follows the citation below.
arXiv Detail & Related papers (2024-10-14T09:54:20Z)
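The profiling idea rests on Laplacian smoothing, sketched below: node features are repeatedly averaged with their neighbors', and a summary of the smoothed features can serve as a task signature. The normalization, step count, and mean-based profile are assumptions, not the paper's exact procedure.

```python
# A toy sketch of Laplacian smoothing on node features, assuming symmetric
# normalization with self-loops; the profile-by-mean step is illustrative.
import torch

def laplacian_smooth(x: torch.Tensor, adj: torch.Tensor, steps: int = 2) -> torch.Tensor:
    # Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}.
    a_hat = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
    norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
    for _ in range(steps):
        x = norm @ x  # each step mixes a node's features with its neighborhood
    return x

# Toy graph: 4 nodes on a path, 8-dim features. A task "profile" could then be
# the mean of the smoothed features, matched against stored task prototypes.
adj = torch.tensor([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=torch.float)
profile = laplacian_smooth(torch.randn(4, 8), adj).mean(dim=0)
print(profile.shape)  # torch.Size([8])
```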
- Can Graph Learning Improve Planning in LLM-based Agents? [61.47027387839096]
Task planning in language agents is emerging as an important research topic alongside the development of large language models (LLMs).
In this paper, we explore graph learning-based methods for task planning, a direction orthogonal to the prevalent focus on prompt design.
Our interest in graph learning stems from a theoretical discovery: the biases of attention and auto-regressive loss impede LLMs' ability to effectively navigate decision-making on graphs.
arXiv Detail & Related papers (2024-05-29T14:26:24Z)
- Generalized Graph Prompt: Toward a Unification of Pre-Training and Downstream Tasks on Graphs [20.406549548630156]
GraphPrompt is a novel pre-training and prompting framework on graphs.
It unifies pre-training and downstream tasks into a common task template.
It also employs a learnable prompt to assist a downstream task in locating the most relevant knowledge from the pre-trained model.
arXiv Detail & Related papers (2023-11-26T14:35:28Z)
- One for All: Towards Training One Graph Model for All Classification Tasks [61.656962278497225]
A unified model for various graph tasks remains underexplored, primarily due to the challenges unique to the graph learning domain.
We propose One for All (OFA), the first general framework that can use a single graph model to address the above challenges.
OFA performs well across different tasks, making it the first general-purpose across-domains classification model on graphs.
arXiv Detail & Related papers (2023-09-29T21:15:26Z)
- Deep Prompt Tuning for Graph Transformers [55.2480439325792]
Fine-tuning is resource-intensive and requires storing multiple copies of large models.
We propose a novel approach called deep graph prompt tuning as an alternative to fine-tuning.
By freezing the pre-trained parameters and only updating the added tokens, our approach reduces the number of free parameters and eliminates the need for multiple model copies; a hedged sketch of this freeze-and-tune pattern follows the citation below.
arXiv Detail & Related papers (2023-09-18T20:12:17Z)
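The freeze-and-tune pattern is sketched below with a stand-in linear encoder in place of the paper's pre-trained graph transformer; only the added prompt tokens receive gradients. Everything beyond the general pattern (dimensions, the prepend-style insertion, the placeholder loss) is an illustrative assumption.

```python
# A minimal sketch of prompt tuning with a frozen backbone: no per-task copy
# of the large model is stored, because only the prompt tokens are trained.
import torch
import torch.nn as nn

backbone = nn.Linear(16, 16)      # stand-in for a pre-trained graph transformer
for p in backbone.parameters():
    p.requires_grad = False       # freeze all pre-trained weights

prompt_tokens = nn.Parameter(torch.zeros(4, 16))  # the only trainable tensors
optimizer = torch.optim.Adam([prompt_tokens], lr=1e-3)

x = torch.randn(5, 16)                                # toy node features
out = backbone(torch.cat([prompt_tokens, x], dim=0))  # prepend tokens, encode
loss = out.pow(2).mean()                              # placeholder task loss
loss.backward()
optimizer.step()
# Gradients flow only into the prompt tokens, never the frozen backbone:
print(backbone.weight.grad is None, prompt_tokens.grad is not None)  # True True
```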
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) on a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the finetuned LM; a hedged sketch of this encoding step follows the citation below.
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
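A sketch of the second stage follows: encode each node's text with a language model and mean-pool the last hidden states into node embeddings. The PEFT fine-tuning stage is omitted here, and bert-base-uncased is only a stand-in model choice, not necessarily the one used in the paper.

```python
# A hedged sketch of turning node text into embeddings via an LM's last hidden
# states; in SimTeG the LM would first be PEFT-finetuned on the downstream task.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
lm = AutoModel.from_pretrained("bert-base-uncased")  # stand-in model choice

node_texts = ["paper about graph neural networks", "paper about prompt learning"]
batch = tok(node_texts, padding=True, return_tensors="pt")

with torch.no_grad():
    hidden = lm(**batch).last_hidden_state  # (num_nodes, num_tokens, dim)

# Mean-pool over real (non-padding) tokens to get one embedding per node;
# these embeddings then feed a downstream GNN.
mask = batch["attention_mask"].unsqueeze(-1)
node_emb = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(node_emb.shape)  # torch.Size([2, 768])
```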
- All in One: Multi-task Prompting for Graph Neural Networks [30.457491401821652]
We propose a novel multi-task prompting method for graph models.
We first unify the format of graph prompts and language prompts with the prompt token, token structure, and inserting pattern.
We then study the task space of various graph applications and reformulate downstream problems to the graph-level task.
arXiv Detail & Related papers (2023-07-04T06:27:31Z)
- GraphPrompt: Unifying Pre-Training and Downstream Tasks for Graph Neural Networks [16.455234748896157]
GraphPrompt is a novel pre-training and prompting framework on graphs.
It unifies pre-training and downstream tasks into a common task template.
It also employs a learnable prompt to assist a downstream task in locating the most relevant knowledge from the pre-trained model.
arXiv Detail & Related papers (2023-02-16T02:51:38Z)
- Graph Pooling for Graph Neural Networks: Progress, Challenges, and Opportunities [128.55790219377315]
Graph neural networks have emerged as a leading architecture for many graph-level tasks.
Graph pooling is indispensable for obtaining a holistic graph-level representation of the whole graph; a minimal pooling sketch follows the citation below.
arXiv Detail & Related papers (2022-04-15T04:02:06Z)
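Since several entries above reformulate node-level and edge-level problems as graph-level ones, pooling is the step that turns per-node embeddings into one graph vector. The sketch below shows only the simplest choice, mean pooling; the surveyed paper covers far richer (e.g., hierarchical) operators.

```python
# A minimal sketch of graph pooling: collapsing per-node embeddings into a
# single holistic graph-level representation. Mean pooling is the simplest
# operator; shapes here are illustrative.
import torch

def mean_pool(node_emb: torch.Tensor) -> torch.Tensor:
    # node_emb: (num_nodes, dim) -> (dim,) graph-level representation.
    return node_emb.mean(dim=0)

graph_repr = mean_pool(torch.randn(7, 32))  # a 7-node graph, 32-dim features
print(graph_repr.shape)                     # torch.Size([32])
```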
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.