Inductive Graph Alignment Prompt: Bridging the Gap between Graph
Pre-training and Inductive Fine-tuning From Spectral Perspective
- URL: http://arxiv.org/abs/2402.13556v1
- Date: Wed, 21 Feb 2024 06:25:54 GMT
- Title: Inductive Graph Alignment Prompt: Bridging the Gap between Graph
Pre-training and Inductive Fine-tuning From Spectral Perspective
- Authors: Yuchen Yan, Peiyan Zhang, Zheng Fang, Qingqing Long
- Abstract summary: The "Graph pre-training and fine-tuning" paradigm has significantly improved Graph Neural Networks (GNNs).
However, due to the immense gap in data and tasks between the pre-training and fine-tuning stages, model performance is still limited.
We propose a novel graph prompt-based method called Inductive Graph Alignment Prompt (IGAP).
- Score: 13.277779426525056
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The "Graph pre-training and fine-tuning" paradigm has significantly improved
Graph Neural Networks (GNNs) by capturing general knowledge without manual
annotations for downstream tasks. However, due to the immense gap in data and
tasks between the pre-training and fine-tuning stages, model performance is
still limited. Inspired by prompt fine-tuning in Natural Language
Processing (NLP), many endeavors have been made to bridge the gap in the graph
domain. However, existing methods simply reformulate fine-tuning tasks into the
form of the pre-training ones. Under the premise that the pre-training graphs
are compatible with the fine-tuning ones, these methods typically operate in
the transductive setting. To generalize graph pre-training to the inductive
scenario, where the fine-tuning graphs may differ significantly from the
pre-training ones, we propose a novel graph prompt-based method called
Inductive Graph Alignment Prompt (IGAP). First, we unify the mainstream graph
pre-training frameworks and analyze the essence of graph pre-training through
the lens of graph spectral theory. We then identify two sources of the data
gap in the inductive setting: (i) the graph signal gap and (ii) the graph
structure gap. Based on this insight into graph pre-training, we propose to
bridge the graph signal gap and the graph structure gap with learnable prompts
in the spectral space. A theoretical analysis guarantees the effectiveness of
our method. Finally, we conduct extensive experiments on node classification
and graph classification tasks under the transductive, semi-inductive, and
inductive settings. The results demonstrate that our proposed method
successfully bridges the data gap under different settings.
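
Going by the abstract alone, a minimal PyTorch sketch of what "learnable prompts in the spectral space" could look like is given below. The class name, the choice of the normalized Laplacian, and the low-frequency truncation `k` are assumptions for illustration, not the authors' implementation.

```python
import torch

class SpectralAlignmentPrompt(torch.nn.Module):
    """Hypothetical sketch (not IGAP's code): a signal prompt shifts the
    spectral coefficients of node features (graph signal gap), and a spectrum
    prompt perturbs the low-frequency eigenvalues (graph structure gap)."""

    def __init__(self, feat_dim: int, k: int = 32):
        super().__init__()
        self.k = k  # number of low-frequency components to align
        self.signal_prompt = torch.nn.Parameter(torch.zeros(k, feat_dim))
        self.spectrum_prompt = torch.nn.Parameter(torch.zeros(k))

    def forward(self, adj: torch.Tensor, x: torch.Tensor):
        # Symmetrically normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}.
        deg = adj.sum(dim=1).clamp(min=1e-12)
        d_inv_sqrt = torch.diag(deg.pow(-0.5))
        lap = torch.eye(adj.size(0)) - d_inv_sqrt @ adj @ d_inv_sqrt
        evals, evecs = torch.linalg.eigh(lap)       # ascending eigenvalues
        u_k = evecs[:, : self.k]                    # low-frequency spectral basis
        x_spec = u_k.T @ x + self.signal_prompt     # align the graph signal
        spectrum = evals[: self.k] + self.spectrum_prompt  # align the structure
        return u_k @ x_spec, spectrum               # prompted signal for a frozen GNN

# Toy usage on a random undirected graph with 50 nodes and 16-dim features.
adj = torch.bernoulli(torch.full((50, 50), 0.1)).triu(1)
adj = adj + adj.T
x_aligned, spectrum = SpectralAlignmentPrompt(feat_dim=16, k=8)(adj, torch.randn(50, 16))
```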
Related papers
- Does Graph Prompt Work? A Data Operation Perspective with Theoretical Analysis [7.309233340654514]
This paper introduces a theoretical framework that rigorously analyzes graph prompting from a data operation perspective.
We provide a formal guarantee theorem demonstrating graph prompts' capacity to approximate graph transformation operators.
We derive upper bounds on the error of these data operations by graph prompts for a single graph, and extend the discussion to batches of graphs.
arXiv Detail & Related papers (2024-10-02T15:07:13Z)
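
As a toy illustration of this data-operation view (constructed here, not taken from the paper), an additive feature prompt can be trained to imitate a fixed target transformation; the residual error it cannot remove is exactly what such upper bounds quantify.

```python
import torch

torch.manual_seed(0)
X = torch.randn(100, 16)                      # node features of a toy graph
target = lambda Z: 0.9 * Z + 0.1              # a fixed "data operation" to approximate

prompt = torch.nn.Parameter(torch.zeros(16))  # additive graph prompt on features
opt = torch.optim.Adam([prompt], lr=1e-2)
for _ in range(500):
    loss = ((X + prompt) - target(X)).pow(2).mean()  # approximation error
    opt.zero_grad(); loss.backward(); opt.step()

# The residual stays non-zero: a purely additive prompt cannot realize the
# multiplicative part of the operator, mirroring the idea of error upper bounds.
print(f"residual error: {loss.item():.4f}")
```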
- Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns [13.378277755978258]
We show that the structural divergence between pre-training and downstream graphs significantly limits transferability under the vanilla fine-tuning strategy.
We propose G-Tuning to preserve the generative patterns of downstream graphs.
G-Tuning demonstrates average improvements of 0.5% and 2.6% on in-domain and out-of-domain transfer learning experiments, respectively.
arXiv Detail & Related papers (2023-12-21T05:17:10Z)
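
The summary above does not detail how generative patterns are preserved, so the following is only a crude stand-in (not G-Tuning's actual objective): an auxiliary term that asks node embeddings to reconstruct the downstream graph's edges during fine-tuning.

```python
import torch
import torch.nn.functional as F

def structure_preserving_loss(z: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
    """Auxiliary term: node embeddings z should be able to regenerate the
    downstream adjacency (dense 0/1 float matrix), a simple proxy for
    preserving the downstream graph's structural patterns."""
    logits = z @ z.T  # pairwise link scores
    return F.binary_cross_entropy_with_logits(logits, adj)

# Hypothetical fine-tuning objective: task_loss + lam * structure_preserving_loss(z, adj)
```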
- Self-Pro: A Self-Prompt and Tuning Framework for Graph Neural Networks [10.794305560114903]
Self-Pro is a self-prompt and tuning framework for graphs based on the model and the data itself.
We introduce asymmetric graph contrastive learning as the pretext task to address heterophily and to align the objectives of the pretext and downstream tasks.
We conduct extensive experiments on 11 benchmark datasets to demonstrate its superiority.
arXiv Detail & Related papers (2023-10-16T12:58:04Z)
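
The exact form of the asymmetry is not given in this summary; the sketch below shows one common reading (a BYOL-style asymmetric pair, assumed here) in which only one branch receives a predictor head.

```python
import torch
import torch.nn.functional as F

encoder = torch.nn.Linear(16, 32)    # stand-in for a GNN encoder
predictor = torch.nn.Linear(32, 32)  # head applied to one branch only

def asymmetric_contrastive_loss(x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
    z1 = predictor(encoder(x1))      # online branch (with predictor)
    z2 = encoder(x2).detach()        # target branch (no predictor, no gradient)
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    return 2 - 2 * (z1 * z2).sum(-1).mean()  # cosine-based alignment loss

loss = asymmetric_contrastive_loss(torch.randn(8, 16), torch.randn(8, 16))
```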
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly simple approach for textual graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) of a pre-trained language model (LM) on the downstream task.
We then generate node embeddings using the last hidden states of the fine-tuned LM.
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
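
A minimal sketch of this two-step recipe using Hugging Face `transformers` and `peft`; the model checkpoint, LoRA settings, and mean pooling are assumptions, not necessarily SimTeG's configuration.

```python
import torch
from transformers import AutoModel, AutoTokenizer
from peft import LoraConfig, get_peft_model

tok = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
lm = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
lm = get_peft_model(lm, LoraConfig(r=8, target_modules=["query", "value"]))
# ... step 1: run supervised PEFT on the downstream node texts here ...

@torch.no_grad()
def node_embeddings(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = lm(**batch).last_hidden_state       # last hidden states of the LM
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)  # mean-pool over valid tokens

x = node_embeddings(["title + abstract for node 0", "title + abstract for node 1"])
# x then feeds a standard GNN on the graph structure.
```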
- You Only Transfer What You Share: Intersection-Induced Graph Transfer Learning for Link Prediction [79.15394378571132]
We investigate a previously overlooked phenomenon: in many cases, a densely connected, complementary graph can be found for the original graph.
The denser graph may share nodes with the original graph, which offers a natural bridge for transferring selective, meaningful knowledge.
We identify this setting as Graph Intersection-induced Transfer Learning (GITL), motivated by practical applications in e-commerce and academic co-authorship prediction.
arXiv Detail & Related papers (2023-02-27T22:56:06Z)
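
The setting reduces to locating the node intersection that bridges the two graphs; a toy `networkx` illustration (the graphs here are arbitrary stand-ins):

```python
import networkx as nx

g_sparse = nx.karate_club_graph()           # stand-in for the original, sparser graph
g_dense = nx.complete_graph(range(20, 40))  # stand-in for the denser, complementary graph

shared = sorted(set(g_sparse) & set(g_dense))  # the intersection acts as the transfer bridge
print(f"{len(shared)} shared nodes bridge the two graphs: {shared[:5]}...")
```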
- Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
arXiv Detail & Related papers (2023-02-06T16:26:29Z)
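
The paper's operator bank is not enumerated in this summary; as a generic example of a spectral augmentation (constructed here), one can jitter the Laplacian spectrum and rebuild the adjacency to obtain a contrastive view.

```python
import torch

def spectral_augment(adj: torch.Tensor, noise: float = 0.05) -> torch.Tensor:
    """Illustrative spectral augmentation (not the paper's operators): perturb
    the eigenvalues of the combinatorial Laplacian L = D - A and reconstruct
    a weighted adjacency from the perturbed spectrum."""
    deg = torch.diag(adj.sum(1))
    lap = deg - adj
    evals, evecs = torch.linalg.eigh(lap)
    evals = (evals + noise * torch.randn_like(evals)).clamp(min=0.0)
    lap_aug = evecs @ torch.diag(evals) @ evecs.T
    adj_aug = torch.diag(lap_aug.diag()) - lap_aug  # recover off-diagonal weights
    return adj_aug.clamp(min=0.0)                   # keep edge weights non-negative
```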
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
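
A simplified sketch of what a neural graph-matching score can look like (a simplification, not GMPT's architecture): soft cross-graph node correspondences aggregate node affinities into a graph-pair score usable in a contrastive objective.

```python
import torch

def match_score(h1: torch.Tensor, h2: torch.Tensor) -> torch.Tensor:
    """Soft graph matching between two node-embedding sets: attention-style
    correspondences weight the node-to-node affinities."""
    sim = h1 @ h2.T                         # node-to-node affinities
    corr = torch.softmax(sim, dim=-1)       # soft correspondence matrix
    return (corr * sim).sum() / h1.size(0)  # normalized graph-pair score

h_a, h_b = torch.randn(10, 32), torch.randn(12, 32)  # embeddings of two graphs
print(match_score(h_a, h_b))
```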
- Bringing Your Own View: Graph Contrastive Learning without Prefabricated Data Augmentations [94.41860307845812]
Self-supervision has recently surged to a new frontier: graph learning.
GraphCL uses a prefabricated prior reflected by the ad-hoc manual selection of graph data augmentations.
We extend the prefabricated discrete prior in the augmentation set to a learnable continuous prior in the parameter space of graph generators.
We leverage both the principle of information minimization (InfoMin) and the information bottleneck (InfoBN) to regularize the learned priors.
arXiv Detail & Related papers (2022-01-04T15:49:18Z)
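
One concrete reading of a "learnable continuous prior" (an illustration, not the paper's generator): per-edge logits parameterize a differentiable augmentation sampler via a Gumbel-Sigmoid relaxation, which InfoMin/InfoBN terms could then regularize.

```python
import torch

num_edges = 200
edge_logits = torch.nn.Parameter(torch.zeros(num_edges))  # learnable continuous prior

def sample_edge_mask(logits: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Differentiable edge sampling via a Gumbel-Sigmoid (Concrete) relaxation,
    so the augmentation generator can be trained end to end."""
    u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
    gumbel = torch.log(u) - torch.log1p(-u)        # logistic noise
    return torch.sigmoid((logits + gumbel) / tau)  # soft keep-probabilities in (0, 1)

mask = sample_edge_mask(edge_logits)  # scale edge weights by `mask` to form a view
```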
This list is automatically generated from the titles and abstracts of the papers on this site.