MentorGNN: Deriving Curriculum for Pre-Training GNNs
- URL: http://arxiv.org/abs/2208.09905v1
- Date: Sun, 21 Aug 2022 15:12:08 GMT
- Title: MentorGNN: Deriving Curriculum for Pre-Training GNNs
- Authors: Dawei Zhou, Lecheng Zheng, Dongqi Fu, Jiawei Han, Jingrui He
- Abstract summary: We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
- Score: 61.97574489259085
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph pre-training strategies have been attracting a surge of attention in
the graph mining community, due to their flexibility in parameterizing graph
neural networks (GNNs) without any label information. The key idea lies in
encoding valuable information into the backbone GNNs, by predicting the masked
graph signals extracted from the input graphs. In order to balance the
importance of diverse graph signals (e.g., nodes, edges, subgraphs), the
existing approaches are mostly hand-engineered by introducing hyperparameters
to re-weight the importance of graph signals. However, human interventions with
sub-optimal hyperparameters often inject additional bias and deteriorate the
generalization performance in downstream applications. This paper addresses
these limitations from a new perspective, i.e., deriving curriculum for
pre-training GNNs. We propose an end-to-end model named MentorGNN that aims to
supervise the pre-training process of GNNs across graphs with diverse
structures and disparate feature spaces. To comprehend heterogeneous graph
signals at different granularities, we propose a curriculum learning paradigm
that automatically re-weights graph signals in order to ensure good
generalization in the target domain. Moreover, we shed new light on the problem
of domain adaptation on relational data (i.e., graphs) by deriving a natural and
interpretable upper bound on the generalization error of the pre-trained GNNs.
Extensive experiments on a wealth of real graphs validate the performance of
MentorGNN.
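To make the masked-signal pre-training and the learned re-weighting concrete, here is a minimal PyTorch sketch. It is not the authors' implementation: the toy one-layer GNN, the two signal types (node features and edges), and every name in it are illustrative assumptions, and MentorGNN's actual curriculum operates across graphs with disparate feature spaces.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class ToyGNN(nn.Module):
    """One-layer GNN: linear transform, then normalized-adjacency propagation."""
    def __init__(self, d_in, d_hid):
        super().__init__()
        self.lin = nn.Linear(d_in, d_hid)

    def forward(self, A_hat, X):
        return torch.relu(A_hat @ self.lin(X))

# Synthetic graph: 50 nodes, ~10% edge density, 16-dim features.
n, d = 50, 16
A = ((torch.rand(n, n) < 0.1).float() + torch.eye(n)).clamp(max=1.0)
A = ((A + A.T) > 0).float()                                # symmetrize
deg_inv_sqrt = A.sum(1).rsqrt()
A_hat = deg_inv_sqrt[:, None] * A * deg_inv_sqrt[None, :]  # D^-1/2 A D^-1/2
X = torch.randn(n, d)
src, dst = A.nonzero(as_tuple=True)

gnn, decoder = ToyGNN(d, 32), nn.Linear(32, d)
signal_logits = nn.Parameter(torch.zeros(2))  # learned weights over signal types
opt = torch.optim.Adam([*gnn.parameters(), *decoder.parameters(), signal_logits], lr=1e-2)

for step in range(100):
    masked = torch.randperm(n)[: n // 7]      # hide ~15% of node features
    X_in = X.clone()
    X_in[masked] = 0.0
    H = gnn(A_hat, X_in)
    # Node-level signal: reconstruct the masked features.
    node_loss = ((decoder(H[masked]) - X[masked]) ** 2).mean()
    # Edge-level signal: observed edges should score highly.
    edge_loss = -torch.log(torch.sigmoid((H[src] * H[dst]).sum(-1)) + 1e-8).mean()
    # Softmax-normalized weights replace hand-tuned re-weighting
    # hyperparameters. Caveat: alone, this objective can collapse onto the
    # easier signal; MentorGNN ties the weights to target-domain
    # generalization, which this sketch does not model.
    w = torch.softmax(signal_logits, dim=0)
    loss = w[0] * node_loss + w[1] * edge_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```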
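The paper's exact bound is not reproduced in the abstract. For intuition, classical domain-adaptation bounds (Ben-David et al., 2010) have the shape below; the abstract's claim is that an analogous, interpretable bound holds for GNNs pre-trained on a source graph and applied to a target graph:

```latex
\epsilon_T(h) \;\le\; \epsilon_S(h)
  \;+\; \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}\!\left(\mathcal{D}_S, \mathcal{D}_T\right)
  \;+\; \lambda
```

Here $\epsilon_S(h)$ and $\epsilon_T(h)$ are the source and target risks of a hypothesis $h$, the middle term measures the divergence between the source and target distributions, and $\lambda$ is the risk of the best joint hypothesis. The novelty claimed by MentorGNN is adapting this style of analysis to relational data.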
Related papers
- GraphLoRA: Structure-Aware Contrastive Low-Rank Adaptation for Cross-Graph Transfer Learning [17.85404473268992]
Graph Neural Networks (GNNs) have demonstrated remarkable proficiency in handling a range of graph analytical tasks.
Despite their versatility, GNNs face significant challenges in transferability, limiting their utility in real-world applications.
We propose GraphLoRA, an effective and parameter-efficient method for transferring well-trained GNNs to diverse graph domains.
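For readers unfamiliar with low-rank adaptation, the sketch below shows the generic LoRA pattern in PyTorch: a frozen pre-trained weight plus a trainable low-rank update. The class name and rank are illustrative assumptions; GraphLoRA adds structure-aware contrastive objectives on top, which are not shown here.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pre-trained layer plus a trainable low-rank update: Wx + BAx.

    Generic LoRA pattern for illustration only; GraphLoRA builds on this
    idea with structure-aware contrastive training not sketched here.
    """
    def __init__(self, pretrained: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = pretrained
        for p in self.base.parameters():
            p.requires_grad_(False)            # keep pre-trained weights fixed
        d_out, d_in = pretrained.weight.shape
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))  # zero init: no update at start

    def forward(self, x):
        return self.base(x) + x @ self.A.T @ self.B.T
```

Only `A` and `B` are trained, so the number of adapted parameters scales with the rank rather than with the full weight matrix.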
arXiv Detail & Related papers (2024-09-25T06:57:42Z)
- Rethinking Propagation for Unsupervised Graph Domain Adaptation [17.443218657417454]
Unsupervised Graph Domain Adaptation (UGDA) aims to transfer knowledge from a labelled source graph to an unlabelled target graph.
We propose a simple yet effective approach called A2GNN for graph domain adaptation.
arXiv Detail & Related papers (2024-02-08T13:24:57Z)
- Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
The generalization ability of existing GNNs degrades when there are distribution shifts between training and testing graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
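The sketch below illustrates the reweighting idea in its simplest (linear) form: learn weights over training samples so that weighted cross-feature covariances shrink. The paper's method is a nonlinear graph decorrelation; this function and its settings are hypothetical simplifications.

```python
import torch

def decorrelation_weights(X, iters=200, lr=0.05):
    """Learn sample weights that shrink weighted covariances between features.

    Linear stand-in for nonlinear graph decorrelation: the shared idea is
    reweighting samples to break spurious feature correlations.
    """
    n, _ = X.shape
    logits = torch.zeros(n, requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(iters):
        w = torch.softmax(logits, dim=0)        # weights stay positive, sum to 1
        mu = (w.unsqueeze(1) * X).sum(0)        # weighted feature means
        Xc = X - mu
        cov = (w.unsqueeze(1) * Xc).T @ Xc      # weighted covariance matrix
        off_diag = cov - torch.diag(torch.diag(cov))
        loss = (off_diag ** 2).sum()            # penalize cross-feature correlation
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.softmax(logits, dim=0).detach()
```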
arXiv Detail & Related papers (2023-12-19T12:25:10Z)
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
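For intuition about decomposition-based attribution, the linear case below decomposes a one-layer GNN prediction exactly into per-node contributions. DEGREE's contribution is carrying such decompositions through nonlinear, multi-layer GNNs, which this toy function does not attempt.

```python
import torch

def node_contributions(A_hat, X, W, target):
    """Per-node contributions to a *linear* one-layer GNN prediction.

    For y = A_hat @ X @ W, the prediction for `target` is a sum over source
    nodes j of A_hat[target, j] * (X[j] @ W), so contributions decompose
    exactly; row j of the result is node j's share of the prediction.
    """
    contrib = A_hat[target].unsqueeze(1) * (X @ W)   # shape: (n_nodes, d_out)
    assert torch.allclose(contrib.sum(0), A_hat[target] @ (X @ W), atol=1e-5)
    return contrib
```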
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- GPT-GNN: Generative Pre-Training of Graph Neural Networks [93.35945182085948]
Graph neural networks (GNNs) have been demonstrated to be powerful in modeling graph-structured data.
We present the GPT-GNN framework to initialize GNNs by generative pre-training.
We show that GPT-GNN significantly outperforms state-of-the-art GNN models without pre-training by up to 9.1% across various downstream tasks.
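Generative pre-training here means training the backbone to generate the graph's own content. The sketch below shows the two objective families, attribute generation and edge generation, in simplified form; the signature is hypothetical and GPT-GNN's factorized objective is more involved.

```python
import torch

def generative_pretrain_losses(H, X, masked, pos_edges, neg_edges, attr_decoder):
    """Attribute- and edge-generation losses in the spirit of GPT-GNN.

    H: node embeddings from the backbone GNN, computed with the attributes of
    `masked` nodes hidden; pos_edges/neg_edges: (src, dst) index tensors.
    Simplified illustration, not the paper's objective.
    """
    # Attribute generation: reconstruct the hidden node attributes.
    attr_loss = ((attr_decoder(H[masked]) - X[masked]) ** 2).mean()

    # Edge generation: observed edges should outscore negative samples.
    def score(edges):
        src, dst = edges
        return (H[src] * H[dst]).sum(-1)

    edge_loss = -(torch.log(torch.sigmoid(score(pos_edges)) + 1e-8).mean()
                  + torch.log(1.0 - torch.sigmoid(score(neg_edges)) + 1e-8).mean())
    return attr_loss + edge_loss
```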
arXiv Detail & Related papers (2020-06-27T20:12:33Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
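XGNN trains a graph generator with reinforcement learning to find a pattern that maximizes the model's score for a target class. The greedy stand-in below conveys the same search objective with a much simpler procedure; `model(A)` mapping an adjacency matrix to class probabilities is an assumed interface, not XGNN's API.

```python
import torch

def greedy_explain(model, n_nodes, target_class, steps=5):
    """Greedily grow a graph that maximizes the model's target-class score.

    A simplified stand-in for XGNN, which instead trains an RL-based graph
    generator. `model(A)` is assumed to return class probabilities.
    """
    A = torch.eye(n_nodes)                     # start from isolated self-loops
    for _ in range(steps):
        best, best_edge = model(A)[target_class], None
        for i in range(n_nodes):               # try every missing edge
            for j in range(i + 1, n_nodes):
                if A[i, j] == 0:
                    A2 = A.clone()
                    A2[i, j] = A2[j, i] = 1.0
                    p = model(A2)[target_class]
                    if p > best:
                        best, best_edge = p, (i, j)
        if best_edge is None:                  # no single edge helps; stop
            break
        i, j = best_edge
        A[i, j] = A[j, i] = 1.0
    return A                                   # the explanatory graph pattern
```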
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.