HetGPT: Harnessing the Power of Prompt Tuning in Pre-Trained
Heterogeneous Graph Neural Networks
- URL: http://arxiv.org/abs/2310.15318v3
- Date: Tue, 23 Jan 2024 18:27:30 GMT
- Title: HetGPT: Harnessing the Power of Prompt Tuning in Pre-Trained
Heterogeneous Graph Neural Networks
- Authors: Yihong Ma, Ning Yan, Jiayu Li, Masood Mortazavi and Nitesh V. Chawla
- Abstract summary: HetGPT is a post-training prompting framework for graph neural networks.
It improves the performance of state-of-the-art HGNNs on semi-supervised node classification.
- Score: 24.435068514392487
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graphs have emerged as a natural choice to represent and analyze the
intricate patterns and rich information of the Web, enabling applications such
as online page classification and social recommendation. The prevailing
"pre-train, fine-tune" paradigm has been widely adopted in graph machine
learning tasks, particularly in scenarios with limited labeled nodes. However,
this approach often exhibits a misalignment between the training objectives of
pretext tasks and those of downstream tasks. This gap can result in the
"negative transfer" problem, wherein the knowledge gained from pre-training
adversely affects performance in the downstream tasks. The surge in
prompt-based learning within Natural Language Processing (NLP) suggests the
potential of adapting a "pre-train, prompt" paradigm to graphs as an
alternative. However, existing graph prompting techniques are tailored to
homogeneous graphs, neglecting the inherent heterogeneity of Web graphs. To
bridge this gap, we propose HetGPT, a general post-training prompting framework
to improve the predictive performance of pre-trained heterogeneous graph neural
networks (HGNNs). The key is the design of a novel prompting function that
integrates a virtual class prompt and a heterogeneous feature prompt, with the
aim of reformulating downstream tasks to mirror pretext tasks. Moreover, HetGPT
introduces a multi-view neighborhood aggregation mechanism, capturing the
complex neighborhood structure in heterogeneous graphs. Extensive experiments
on three benchmark datasets demonstrate HetGPT's capability to enhance the
performance of state-of-the-art HGNNs on semi-supervised node classification.
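
To make the two prompt types concrete, here is a minimal PyTorch sketch of what a virtual class prompt plus a per-type feature prompt could look like. The module name, shapes, and mean-based multi-view fusion are our own illustrative assumptions, not HetGPT's actual implementation.

```python
# Illustrative sketch only; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HeteroPromptSketch(nn.Module):
    def __init__(self, num_classes: int, hidden_dim: int, node_types: list[str]):
        super().__init__()
        # Virtual class prompt: one learnable prototype per class; downstream
        # classification becomes matching a node embedding to a prototype,
        # mirroring a similarity-based pretext objective.
        self.class_prompt = nn.Parameter(torch.randn(num_classes, hidden_dim))
        # Heterogeneous feature prompt: a learnable vector per node type,
        # added to the frozen pre-trained embeddings of that type.
        self.feat_prompt = nn.ParameterDict({
            t: nn.Parameter(torch.zeros(hidden_dim)) for t in node_types
        })

    def forward(self, h: torch.Tensor, node_type: str,
                neighbor_views: list[torch.Tensor]) -> torch.Tensor:
        # Inject the type-specific prompt into the frozen embeddings.
        z = h + self.feat_prompt[node_type]
        # Multi-view neighborhood aggregation: average embeddings from several
        # views (e.g., different metapaths), then fuse with the node itself.
        if neighbor_views:
            z = z + torch.stack(neighbor_views, dim=0).mean(dim=0)
        # Score each node against the virtual class prototypes.
        return F.normalize(z, dim=-1) @ F.normalize(self.class_prompt, dim=-1).T
```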
Related papers
- Instance-Aware Graph Prompt Learning [71.26108600288308]
We introduce Instance-Aware Graph Prompt Learning (IA-GPL) in this paper.
The process involves generating intermediate prompts for each instance using a lightweight architecture.
Experiments conducted on multiple datasets and settings showcase the superior performance of IA-GPL compared to state-of-the-art baselines.
arXiv Detail & Related papers (2024-11-26T18:38:38Z) - ULTRA-DP: Unifying Graph Pre-training with Multi-task Graph Dual Prompt [67.8934749027315]
- ULTRA-DP: Unifying Graph Pre-training with Multi-task Graph Dual Prompt [67.8934749027315]
We propose a unified framework for graph hybrid pre-training that injects task identification and position identification into GNNs.
We also propose a novel pre-training paradigm based on a group of $k$-nearest neighbors.
arXiv Detail & Related papers (2023-10-23T12:11:13Z) - TouchUp-G: Improving Feature Representation through Graph-Centric
Finetuning [37.318961625795204]
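
One plausible reading of the dual-prompt idea, sketched below; the embedding-based task prompt and the linear position encoder are assumptions for illustration, not ULTRA-DP's concrete design.

```python
import torch
import torch.nn as nn

class DualPromptSketch(nn.Module):
    """Illustrative only: one learnable embedding per pre-training task
    ("task identification") plus a learned encoding of a node's position
    ("position identification"), both added to the node representation
    before it enters the GNN."""
    def __init__(self, dim: int, num_tasks: int):
        super().__init__()
        self.task_prompt = nn.Embedding(num_tasks, dim)
        self.pos_encoder = nn.Linear(dim, dim)  # maps a positional feature

    def forward(self, x: torch.Tensor, task_id: int,
                pos_feat: torch.Tensor) -> torch.Tensor:
        return x + self.task_prompt.weight[task_id] + self.pos_encoder(pos_feat)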
- TouchUp-G: Improving Feature Representation through Graph-Centric Finetuning [37.318961625795204]
Graph Neural Networks (GNNs) have become the state-of-the-art approach for many high-impact, real-world graph applications.
For feature-rich graphs, a prevalent practice is to use a pretrained model (PM) directly to generate node features.
This practice is suboptimal because the node features extracted from the PM are graph-agnostic, preventing GNNs from fully exploiting the potential correlations between graph structure and node features.
arXiv Detail & Related papers (2023-09-25T05:44:40Z) - SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly simple approach for textual graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) of a pre-trained language model (LM) on the downstream task.
We then generate node embeddings using the last hidden states of the finetuned LM.
arXiv Detail & Related papers (2023-08-03T07:00:04Z) - GraphPrompt: Unifying Pre-Training and Downstream Tasks for Graph Neural
Networks [16.455234748896157]
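
This two-step recipe maps onto standard tooling. Below is a sketch using Hugging Face transformers with LoRA via peft; the model name, LoRA settings, and mean pooling are placeholders rather than the paper's exact configuration.

```python
import torch
from transformers import AutoModel, AutoTokenizer
from peft import LoraConfig, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
# Step 1: wrap the LM with LoRA adapters for parameter-efficient finetuning,
# then train it (plus a classification head, omitted here) on node labels.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16))

# Step 2: node embeddings from the finetuned LM's last hidden states.
@torch.no_grad()
def embed(texts: list[str]) -> torch.Tensor:
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    out = model(**batch)
    mask = batch["attention_mask"].unsqueeze(-1)
    # Mean-pool the last hidden states into one embedding per node text.
    return (out.last_hidden_state * mask).sum(1) / mask.sum(1)
```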
- GraphPrompt: Unifying Pre-Training and Downstream Tasks for Graph Neural Networks [16.455234748896157]
GraphPrompt is a novel pre-training and prompting framework on graphs.
It unifies pre-training and downstream tasks into a common task template.
It also employs a learnable prompt to assist a downstream task in locating the most relevant knowledge from the pre-trained model.
arXiv Detail & Related papers (2023-02-16T02:51:38Z) - MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z) - Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z) - GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding (GCC), a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.