ULTRA-DP: Unifying Graph Pre-training with Multi-task Graph Dual Prompt
- URL: http://arxiv.org/abs/2310.14845v2
- Date: Sun, 17 Dec 2023 08:16:44 GMT
- Title: ULTRA-DP: Unifying Graph Pre-training with Multi-task Graph Dual Prompt
- Authors: Mouxiang Chen, Zemin Liu, Chenghao Liu, Jundong Li, Qiheng Mao, Jianling Sun
- Abstract summary: We propose a unified framework for graph hybrid pre-training that injects task identification and position identification into GNNs.
We also propose a novel pre-training paradigm based on a group of $k$-nearest neighbors.
- Score: 67.8934749027315
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research has demonstrated the efficacy of pre-training graph neural
networks (GNNs) to capture the transferable graph semantics and enhance the
performance of various downstream tasks. However, the semantic knowledge
learned from pretext tasks might be unrelated to the downstream task, leading
to a semantic gap that limits the application of graph pre-training. To reduce
this gap, traditional approaches propose hybrid pre-training, which combines
various pretext tasks in a multi-task learning fashion to learn multi-grained
knowledge; however, such approaches cannot distinguish between tasks, so
transferable task-specific knowledge is distorted across tasks. Moreover, most
GNNs cannot distinguish nodes located in different parts of the graph, so they
fail to learn position-specific knowledge, leading to suboptimal
performance. In this work, inspired by prompt-based tuning in natural
language processing, we propose a unified framework for graph hybrid
pre-training which injects the task identification and position identification
into GNNs through a prompt mechanism, namely multi-task graph dual prompt
(ULTRA-DP). Based on this framework, we propose a prompt-based transferability
test to find the most relevant pretext task in order to reduce the semantic
gap. To implement the hybrid pre-training tasks, beyond the classical edge
prediction task (node-node level), we further propose a novel pre-training
paradigm based on a group of $k$-nearest neighbors (node-group level).
Combining these tasks across different scales expresses structural semantics
more comprehensively and derives richer multi-grained knowledge. Extensive
experiments show that our proposed ULTRA-DP significantly enhances the
performance of hybrid pre-training methods and generalizes to other
pre-training tasks and backbone architectures.
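
To make the dual-prompt idea concrete, below is a minimal PyTorch sketch of how the two prompts and the node-group pretext objective might look. Everything here (the class name DualPrompt, the anchor-distance position encoding, the InfoNCE-style group loss, and all parameter names) is our illustrative assumption based on the abstract, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualPrompt(nn.Module):
    """Hypothetical sketch of an ULTRA-DP-style dual prompt.

    A learnable task-identification vector and a position-identification
    vector (derived from each node's distances to a set of anchor nodes)
    are added to the node features before the GNN encoder. Names and
    shapes are illustrative assumptions, not the authors' released code.
    """

    def __init__(self, num_tasks: int, num_anchors: int, dim: int):
        super().__init__()
        # One learnable prompt vector per pretext task, e.g. edge
        # prediction (node-node level) and k-NN similarity (node-group level).
        self.task_prompts = nn.Embedding(num_tasks, dim)
        # Projects a node's distance profile over the anchors into a
        # position prompt with the same dimensionality as the features.
        self.pos_proj = nn.Linear(num_anchors, dim)

    def forward(self, x: torch.Tensor, task_id: int,
                anchor_dists: torch.Tensor) -> torch.Tensor:
        # x:            [num_nodes, dim] raw node features
        # anchor_dists: [num_nodes, num_anchors] proximity of each node
        #               to the anchor nodes (e.g. shortest-path distance)
        task_p = self.task_prompts(torch.tensor(task_id))  # [dim]
        pos_p = self.pos_proj(anchor_dists.float())        # [num_nodes, dim]
        # Inject both identifications additively; the prompted features
        # are then fed to an ordinary GNN encoder.
        return x + task_p + pos_p


def knn_group_loss(z: torch.Tensor, knn_idx: torch.Tensor,
                   tau: float = 0.5) -> torch.Tensor:
    """A guess at the node-group-level pretext objective: pull each node's
    embedding toward the mean embedding of its k-nearest-neighbor group,
    contrasted against the groups of all other nodes (InfoNCE-style)."""
    # z: [num_nodes, dim] embeddings; knn_idx: [num_nodes, k] neighbor ids
    group = z[knn_idx].mean(dim=1)                          # [num_nodes, dim]
    logits = F.normalize(z, dim=-1) @ F.normalize(group, dim=-1).T / tau
    labels = torch.arange(z.size(0))
    return F.cross_entropy(logits, labels)
```

Under this reading, each pretext task selects its own task_id during hybrid pre-training so the shared GNN can tell the objectives apart; at fine-tuning time, the prompt-based transferability test described above could evaluate each task prompt on a small labeled set and keep the best-performing one.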
Related papers
- Instance-Aware Graph Prompt Learning [71.26108600288308]
We introduce Instance-Aware Graph Prompt Learning (IA-GPL) in this paper.
The process involves generating intermediate prompts for each instance using a lightweight architecture.
Experiments conducted on multiple datasets and settings showcase the superior performance of IA-GPL compared to state-of-the-art baselines.
arXiv Detail & Related papers (2024-11-26T18:38:38Z)
- Can Graph Learning Improve Planning in LLM-based Agents? [61.47027387839096]
Task planning in language agents is emerging as an important research topic alongside the development of large language models (LLMs).
In this paper, we explore graph learning-based methods for task planning, a direction orthogonal to the prevalent focus on prompt design.
Our interest in graph learning stems from a theoretical discovery: the biases of attention and auto-regressive loss impede LLMs' ability to effectively navigate decision-making on graphs.
arXiv Detail & Related papers (2024-05-29T14:26:24Z)
- MultiGPrompt for Multi-Task Pre-Training and Prompting on Graphs [33.2696184519275]
MultiGPrompt is a novel multi-task pre-training and prompting framework for graph representation learning.
We propose a dual-prompt mechanism consisting of composed and open prompts to leverage task-specific and global pre-training knowledge.
arXiv Detail & Related papers (2023-11-28T02:36:53Z)
- HetGPT: Harnessing the Power of Prompt Tuning in Pre-Trained Heterogeneous Graph Neural Networks [24.435068514392487]
HetGPT is a post-training prompting framework for graph neural networks.
It improves the performance of state-of-the-art HGNNs on semi-supervised node classification.
arXiv Detail & Related papers (2023-10-23T19:35:57Z)
- All in One: Multi-task Prompting for Graph Neural Networks [30.457491401821652]
We propose a novel multi-task prompting method for graph models.
We first unify the format of graph prompts and language prompts with the prompt token, token structure, and inserting pattern.
We then study the task space of various graph applications and reformulate downstream problems as graph-level tasks.
arXiv Detail & Related papers (2023-07-04T06:27:31Z)
- Pre-training Multi-task Contrastive Learning Models for Scientific Literature Understanding [52.723297744257536]
Pre-trained language models (LMs) have shown effectiveness in scientific literature understanding tasks.
We propose a multi-task contrastive learning framework, SciMult, to facilitate common knowledge sharing across different literature understanding tasks.
arXiv Detail & Related papers (2023-05-23T16:47:22Z)
- GraphPrompt: Unifying Pre-Training and Downstream Tasks for Graph Neural Networks [16.455234748896157]
GraphPrompt is a novel pre-training and prompting framework on graphs.
It unifies pre-training and downstream tasks into a common task template.
It also employs a learnable prompt to assist a downstream task in locating the most relevant knowledge from the pre-trained model.
arXiv Detail & Related papers (2023-02-16T02:51:38Z)
- Multi-task Self-supervised Graph Neural Networks Enable Stronger Task Generalization [40.265515914447924]
Self-supervised learning (SSL) for graph neural networks (GNNs) has attracted increasing attention from the machine learning community in recent years.
One weakness of conventional SSL frameworks for GNNs is that they learn through a single philosophy.
arXiv Detail & Related papers (2022-10-05T04:09:38Z)
- Fast Inference and Transfer of Compositional Task Structures for Few-shot Task Generalization [101.72755769194677]
We formulate task generalization as a few-shot reinforcement learning problem where a task is characterized by a subtask graph.
Our multi-task subtask graph inferencer (MTSGI) first infers the common high-level task structure in terms of the subtask graph from the training tasks.
Our experiment results on 2D grid-world and complex web navigation domains show that the proposed method can learn and leverage the common underlying structure of the tasks for faster adaptation to the unseen tasks.
arXiv Detail & Related papers (2022-05-25T10:44:25Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)