Neural Graph Matching for Pre-training Graph Neural Networks
- URL: http://arxiv.org/abs/2203.01597v1
- Date: Thu, 3 Mar 2022 09:53:53 GMT
- Title: Neural Graph Matching for Pre-training Graph Neural Networks
- Authors: Yupeng Hou, Binbin Hu, Wayne Xin Zhao, Zhiqiang Zhang, Jun Zhou,
Ji-Rong Wen
- Abstract summary: Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
- Score: 72.32801428070749
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, graph neural networks (GNNs) have shown a powerful
capacity for modeling structural data. However, when adapted to downstream
tasks, they usually require abundant task-specific labeled data, which can be extremely scarce in
practice. A promising solution to data scarcity is to pre-train a transferable
and expressive GNN model on large amounts of unlabeled graphs or coarse-grained
labeled graphs. Then the pre-trained GNN is fine-tuned on downstream datasets
with task-specific fine-grained labels. In this paper, we present a novel Graph
Matching based GNN Pre-Training framework, called GMPT. Focusing on a pair of
graphs, we propose to learn structural correspondences between them via neural
graph matching, consisting of both intra-graph message passing and inter-graph
message passing. In this way, we can learn adaptive representations for a given
graph when paired with different graphs, and both node- and graph-level
characteristics are naturally considered in a single pre-training task. The
proposed method can be applied to fully self-supervised pre-training and
coarse-grained supervised pre-training. We further propose an approximate
contrastive training strategy to significantly reduce time/memory consumption.
Extensive experiments on multi-domain, out-of-distribution benchmarks have
demonstrated the effectiveness of our approach. The code is available at:
https://github.com/RUCAIBox/GMPT.
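To make the matching procedure concrete, here is a minimal sketch of one round of intra-graph plus inter-graph message passing for a pair of graphs. It is not the authors' implementation (see the linked repository for that); the dense adjacency matrices, single linear layers, and dot-product attention are simplifying assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphMatchingLayer(nn.Module):
    """One round of intra- and inter-graph message passing (a sketch)."""

    def __init__(self, dim):
        super().__init__()
        self.intra = nn.Linear(dim, dim)       # neighbor aggregation weights
        self.update = nn.Linear(3 * dim, dim)  # combine self/intra/inter messages

    def forward(self, h1, a1, h2, a2):
        # h1: (n1, d), h2: (n2, d) node features; a1, a2: dense adjacencies.
        # Intra-graph messages: aggregate each node's neighbors in its own graph.
        m1 = self.intra(a1 @ h1)
        m2 = self.intra(a2 @ h2)
        # Inter-graph messages: soft correspondences (attention) to the nodes
        # of the *paired* graph, which makes a graph's representation adaptive
        # to whichever graph it is matched against.
        c1 = F.softmax(h1 @ h2.t(), dim=-1) @ h2   # messages from graph 2
        c2 = F.softmax(h2 @ h1.t(), dim=-1) @ h1   # messages from graph 1
        h1 = F.relu(self.update(torch.cat([h1, m1, c1], dim=-1)))
        h2 = F.relu(self.update(torch.cat([h2, m2, c2], dim=-1)))
        return h1, h2
```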
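Likewise, a common way to keep contrastive training cheap is to score each matched pair only against the other pairs in the same batch, so no extra forward passes are needed. The sketch below shows this generic in-batch InfoNCE objective; the paper's exact approximate strategy may differ, so treat the temperature and batch construction as assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, tau=0.1):
    # z1, z2: (B, d) graph-level representations of B matched pairs.
    # Row i of z1 and row i of z2 are positives; every other row in the
    # batch acts as an approximate negative.
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau                       # (B, B) similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```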
Related papers
- You do not have to train Graph Neural Networks at all on text-attributed graphs [25.044734252779975]
We introduce TrainlessGNN, a linear GNN model capitalizing on the observation that text encodings from the same class often cluster together in a linear subspace.
Our experiments reveal that our trainless models can either match or even surpass their conventionally trained counterparts.
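The abstract does not spell out the construction, but one plausible trainless linear model consistent with the clustering observation sets each class's weight vector to the centroid of that class's smoothed text encodings. Everything below (the smoothing depth, the centroid rule) is a hypothetical illustration, not the paper's method.

```python
import torch

def trainless_logits(x, adj_norm, y, train_mask, num_classes, hops=2):
    # x: (n, d) text encodings; adj_norm: (n, n) normalized adjacency;
    # y: (n,) labels, valid where train_mask is True. (Hypothetical sketch.)
    h = x
    for _ in range(hops):                 # graph smoothing, no learned weights
        h = adj_norm @ h
    # Each class's weight vector = centroid of its training-node encodings.
    w = torch.stack([h[train_mask & (y == c)].mean(dim=0)
                     for c in range(num_classes)])
    return h @ w.t()                      # logits: similarity to centroids
```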
arXiv Detail & Related papers (2024-04-17T02:52:11Z)
- Self-Pro: A Self-Prompt and Tuning Framework for Graph Neural Networks [10.794305560114903]
Self-Prompt is a prompting framework for graphs based on the model and the data themselves.
We introduce asymmetric graph contrastive learning as the pretext task to address heterophily and to align the objectives of the pretext and downstream tasks.
We conduct extensive experiments on 11 benchmark datasets to demonstrate its superiority.
arXiv Detail & Related papers (2023-10-16T12:58:04Z)
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) on a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the fine-tuned LM.
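As a rough sketch of this two-step recipe (the model name, LoRA hyperparameters, and mean pooling are assumptions, not the paper's choices):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

NAME = "sentence-transformers/all-MiniLM-L6-v2"   # an assumed small LM
tok = AutoTokenizer.from_pretrained(NAME)
lm = AutoModelForSequenceClassification.from_pretrained(NAME, num_labels=7)
lm = get_peft_model(lm, LoraConfig(task_type="SEQ_CLS", r=8, lora_alpha=16))
# ... step 1: fine-tune `lm` on (node text, node label) pairs as usual ...

@torch.no_grad()
def node_embeddings(texts):
    # Step 2: mean-pool the last hidden states of the fine-tuned LM; the
    # resulting vectors serve as node features for any downstream GNN.
    enc = tok(texts, padding=True, truncation=True, return_tensors="pt")
    h = lm(**enc, output_hidden_states=True).hidden_states[-1]  # (B, T, d)
    mask = enc["attention_mask"].unsqueeze(-1).float()
    return (h * mask).sum(dim=1) / mask.sum(dim=1)
```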
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- Similarity-aware Positive Instance Sampling for Graph Contrastive Pre-training [82.68805025636165]
We propose to select positive graph instances directly from existing graphs in the training set.
Our selection is based on certain domain-specific pair-wise similarity measurements.
Besides, we develop an adaptive node-level pre-training method to dynamically mask nodes to distribute them evenly in the graph.
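A minimal sketch of the sampling step, assuming a precomputed pairwise similarity matrix (the measure itself, e.g., fingerprint similarity for molecules, is domain-specific):

```python
import torch

def positive_indices(sim, k=1):
    # sim: (N, N) pairwise similarity between training graphs, from some
    # domain-specific measure. Each anchor graph is paired with its k most
    # similar existing graphs instead of an augmented view of itself.
    sim = sim.clone()
    sim.fill_diagonal_(float("-inf"))    # a graph is not its own positive
    return sim.topk(k, dim=-1).indices   # (N, k) positive-instance indices
```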
arXiv Detail & Related papers (2022-06-23T20:12:51Z)
- Training Free Graph Neural Networks for Graph Matching [103.45755859119035]
TFGM is a framework to boost the performance of Graph Neural Network (GNN)-based graph matching without training.
Applying TFGM on various GNNs shows promising improvements over baselines.
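One way to realize training-free matching, sketched under assumptions (an untrained GNN encoder, dot-product affinities, Hungarian assignment); the paper's exact TFGM procedure may differ:

```python
import torch
from scipy.optimize import linear_sum_assignment

@torch.no_grad()
def training_free_match(gnn, x1, a1, x2, a2):
    # gnn is *untrained* (randomly initialized): its message passing still
    # encodes local structure, which is the premise of training-free matching.
    h1, h2 = gnn(x1, a1), gnn(x2, a2)            # (n1, d), (n2, d)
    cost = -(h1 @ h2.t()).cpu().numpy()          # negate to maximize similarity
    rows, cols = linear_sum_assignment(cost)     # Hungarian assignment
    return list(zip(rows.tolist(), cols.tolist()))
```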
arXiv Detail & Related papers (2022-01-14T09:04:46Z)
- Scalable Graph Neural Networks for Heterogeneous Graphs [12.44278942365518]
Graph neural networks (GNNs) are a popular class of parametric model for learning over graph-structured data.
Recent work has argued that GNNs primarily use the graph for feature smoothing, and has shown competitive results on benchmark tasks.
In this work, we ask whether these results can be extended to heterogeneous graphs, which encode multiple types of relationship between different entities.
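The feature-smoothing view suggests a scalable recipe: precompute smoothed features once and train only a graph-free MLP. The sketch below shows the homogeneous case; smoothing each relation type separately is one assumed route to heterogeneous graphs, not necessarily the paper's.

```python
import torch
import torch.nn as nn

def smoothed_features(x, adj_norm, hops=3):
    # Precompute multi-hop smoothed features once (no gradients through the
    # graph); training then reduces to a graph-free MLP, which scales easily.
    feats = [x]
    for _ in range(hops):
        feats.append(adj_norm @ feats[-1])
    return torch.cat(feats, dim=-1)      # (n, d * (hops + 1))

# For a heterogeneous graph, one option is to smooth over each relation's
# normalized adjacency separately and concatenate (or average) the results.
d, hops, num_classes = 128, 3, 40        # illustrative sizes
mlp = nn.Sequential(nn.Linear(d * (hops + 1), 256), nn.ReLU(),
                    nn.Linear(256, num_classes))
```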
arXiv Detail & Related papers (2020-11-19T06:03:35Z)
- GPT-GNN: Generative Pre-Training of Graph Neural Networks [93.35945182085948]
Graph neural networks (GNNs) have been demonstrated to be powerful in modeling graph-structured data.
We present the GPT-GNN framework to initialize GNNs by generative pre-training.
We show that GPT-GNN significantly outperforms state-of-the-art GNN models without pre-training by up to 9.1% across various downstream tasks.
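A simplified stand-in for generative pre-training, assuming dense adjacency and a single masking step; GPT-GNN's actual factorized attribute/edge generation is more involved:

```python
import torch
import torch.nn.functional as F

def generative_step(gnn, attr_dec, x, adj, mask_rate=0.15):
    # Mask some node attributes, encode the graph, then (i) regenerate the
    # masked attributes and (ii) score observed edges against random
    # negatives -- a simplified sketch of the two generation tasks.
    n = x.size(0)
    mask = torch.rand(n) < mask_rate
    x_in = x.clone()
    x_in[mask] = 0.0
    h = gnn(x_in, adj)                                   # (n, d) node states
    attr_loss = F.mse_loss(attr_dec(h[mask]), x[mask])   # attribute generation
    src, dst = adj.nonzero(as_tuple=True)                # observed edges
    neg = torch.randint(0, n, (src.size(0),))            # random negatives
    pos_s = (h[src] * h[dst]).sum(-1)
    neg_s = (h[src] * h[neg]).sum(-1)
    edge_loss = -F.logsigmoid(pos_s - neg_s).mean()      # edge generation
    return attr_loss + edge_loss
```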
arXiv Detail & Related papers (2020-06-27T20:12:33Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
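A sketch of the instance-discrimination setup, assuming plain random-walk subgraph sampling (GCC itself uses random walks with restart); positives and negatives then plug into an InfoNCE loss like the one sketched earlier:

```python
import random

def rw_subgraph(neighbors, start, walk_len=8):
    # neighbors: dict mapping node id -> list of neighbor ids.
    # Collect the nodes visited by a short random walk from `start`; two such
    # samples around the same node form a positive pair, and samples around
    # other nodes in the batch serve as negatives.
    nodes, cur = {start}, start
    for _ in range(walk_len):
        nbrs = neighbors.get(cur, [])
        if not nbrs:
            break
        cur = random.choice(nbrs)
        nodes.add(cur)
    return sorted(nodes)
```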
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.