Transfer Learning of Graph Neural Networks with Ego-graph Information Maximization
- URL: http://arxiv.org/abs/2009.05204v2
- Date: Tue, 26 Oct 2021 17:50:08 GMT
- Title: Transfer Learning of Graph Neural Networks with Ego-graph Information Maximization
- Authors: Qi Zhu, Carl Yang, Yidan Xu, Haonan Wang, Chao Zhang, Jiawei Han
- Abstract summary: Graph neural networks (GNNs) have achieved superior performance in various applications, but training dedicated GNNs can be costly for large-scale graphs.
In this work, we establish a theoretically grounded and practically useful framework for the transfer learning of GNNs.
- Score: 41.867290324754094
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) have achieved superior performance in various
applications, but training dedicated GNNs can be costly for large-scale graphs.
Some recent work started to study the pre-training of GNNs. However, none of
them provide theoretical insights into the design of their frameworks, or clear
requirements and guarantees towards their transferability. In this work, we
establish a theoretically grounded and practically useful framework for the
transfer learning of GNNs. First, we propose a novel view of the essential
graph information and advocate capturing it as the goal of transferable GNN
training, which motivates the design of EGI (Ego-Graph Information
maximization) to achieve this goal analytically. Second, when
node features are structure-relevant, we conduct an analysis of EGI
transferability regarding the difference between the local graph Laplacians of
the source and target graphs. We conduct controlled synthetic experiments to
directly justify our theoretical conclusions. Comprehensive experiments on two
real-world network datasets show consistent results in the analyzed setting of
direct-transferring, while those on large-scale knowledge graphs show promising
results in the more practical setting of transferring with fine-tuning.
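To make the stated objective concrete, below is a minimal PyTorch sketch of what an ego-graph information-maximization loss can look like: a discriminator scores each node's embedding against a summary of its own ego-graph (positive pairs) versus shuffled ego-graphs (negatives). The mean-pooling encoder, the bilinear discriminator, and all names are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of an EGI-style objective: maximize mutual information
# between a node's embedding and its ego-graph, using a Jensen-Shannon MI
# estimator with shuffled ego-graphs as negatives. The one-layer encoder and
# 1-hop mean-pooled "ego-graph summary" are simplifying assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EgoGraphInfoMax(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hid_dim)      # stand-in for a GNN encoder
        self.disc = nn.Bilinear(hid_dim, hid_dim, 1)   # scores (node, ego-graph) pairs

    def forward(self, x, adj):
        h = torch.relu(self.encoder(x))                # node embeddings
        deg = adj.sum(1, keepdim=True).clamp(min=1)
        ego = adj @ h / deg                            # 1-hop ego-graph summaries

        pos = self.disc(h, ego)                        # aligned pairs
        neg = self.disc(h, ego[torch.randperm(len(ego))])  # shuffled negatives
        # Jensen-Shannon MI lower bound: push pos scores up, neg scores down.
        return F.softplus(-pos).mean() + F.softplus(neg).mean()

# Toy usage: 5 nodes, 8 features, a random symmetric adjacency matrix.
x = torch.randn(5, 8)
adj = (torch.rand(5, 5) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()
loss = EgoGraphInfoMax(8, 16)(x, adj)
loss.backward()
```

A real GNN encoder would stack k aggregation layers so the summary covers the k-hop ego-graph rather than only immediate neighbors.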
Related papers
- GraphLoRA: Structure-Aware Contrastive Low-Rank Adaptation for Cross-Graph Transfer Learning [17.85404473268992]
Graph Neural Networks (GNNs) have demonstrated remarkable proficiency in handling a range of graph analytical tasks.
Despite their versatility, GNNs face significant challenges in transferability, limiting their utility in real-world applications.
We propose GraphLoRA, an effective and parameter-efficient method for transferring well-trained GNNs to diverse graph domains.
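As a rough illustration of the parameter-efficiency claim, the sketch below applies the generic low-rank adaptation reparameterization W + BA to one frozen layer of a pre-trained GNN. GraphLoRA's structure-aware contrastive objective is not reproduced here, and the class and hyperparameter names are assumptions.

```python
# Hypothetical sketch of the low-rank adaptation idea behind GraphLoRA:
# freeze a pre-trained GNN weight W and train only a rank-r update B @ A.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, pretrained: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = pretrained
        for p in self.base.parameters():
            p.requires_grad_(False)                    # frozen source-graph weights
        out_dim, in_dim = pretrained.weight.shape
        self.A = nn.Parameter(torch.randn(rank, in_dim) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_dim, rank))  # zero-init: update starts at 0
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

# Toy usage: adapt one frozen layer of a "pre-trained" GNN to a target graph.
layer = LoRALinear(nn.Linear(16, 16))
h = layer(torch.randn(5, 16))                          # (num_nodes, hidden)
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # only A, B train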
arXiv Detail & Related papers (2024-09-25T06:57:42Z)
- Rethinking Propagation for Unsupervised Graph Domain Adaptation [17.443218657417454]
Unsupervised Graph Domain Adaptation (UGDA) aims to transfer knowledge from a labelled source graph to an unlabelled target graph.
We propose a simple yet effective approach called A2GNN for graph domain adaptation.
arXiv Detail & Related papers (2024-02-08T13:24:57Z)
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- An Empirical Study of Retrieval-enhanced Graph Neural Networks [48.99347386689936]
Graph Neural Networks (GNNs) are effective tools for graph representation learning.
We propose a retrieval-enhanced scheme called GRAPHRETRIEVAL, which is agnostic to the choice of graph neural network models.
We conduct comprehensive experiments on 13 datasets and observe that GRAPHRETRIEVAL achieves substantial improvements over existing GNNs.
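The sketch below shows one plausible model-agnostic retrieval scheme in this spirit: the test graph's embedding retrieves its k nearest training graphs, whose labels are blended into the model's own prediction. The fusion rule, weights, and names are illustrative assumptions, not the paper's exact method.

```python
# Hypothetical sketch of retrieval-enhanced prediction: embed the test graph
# with any GNN, retrieve its k nearest training graphs by cosine similarity,
# and mix their label distribution into the model's own prediction.
import torch
import torch.nn.functional as F

def retrieval_enhanced_probs(model_logits, query_emb, train_embs, train_labels,
                             num_classes, k=3, weight=0.5):
    sims = F.cosine_similarity(query_emb.unsqueeze(0), train_embs)  # (N,)
    topk = sims.topk(k).indices
    retrieved = F.one_hot(train_labels[topk], num_classes).float().mean(0)
    return (1 - weight) * model_logits.softmax(-1) + weight * retrieved

# Toy usage: 100 stored training-graph embeddings, 4 classes.
train_embs = torch.randn(100, 32)
train_labels = torch.randint(0, 4, (100,))
probs = retrieval_enhanced_probs(torch.randn(4), torch.randn(32),
                                 train_embs, train_labels, num_classes=4)
```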
arXiv Detail & Related papers (2022-06-01T09:59:09Z)
- Investigating Transfer Learning in Graph Neural Networks [2.320417845168326]
Graph neural networks (GNNs) build on the success of deep learning models by extending them for use in graph spaces.
Transfer learning has proven extremely successful for traditional deep learning problems, resulting in faster training and improved performance.
This research demonstrates that transfer learning is effective with GNNs, and describes how source tasks and the choice of GNN impact the ability to learn generalisable knowledge.
arXiv Detail & Related papers (2022-02-01T20:33:15Z)
- An Analysis of Attentive Walk-Aggregating Graph Neural Networks [34.866935881726256]
Graph neural networks (GNNs) have been shown to possess strong representation power.
We propose a novel GNN model, called AWARE, that aggregates information about the walks in the graph using attention schemes.
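A toy version of walk-level attention is sketched below: enumerate short walks, embed each walk from its node features, and pool the walks with a learned softmax attention. The walk length, the additive walk embedding, and the scoring layer are all illustrative assumptions, not AWARE's exact scheme.

```python
# Hypothetical sketch of attention over graph walks: length-2 walks are
# embedded as the sum of their node features and pooled with learned attention.
import torch
import torch.nn as nn

def walk_attention_readout(x, adj, scorer: nn.Linear):
    n = len(adj)
    walks = [(i, j, k)
             for i in range(n) for j in range(n) for k in range(n)
             if adj[i, j] > 0 and adj[j, k] > 0]
    walk_embs = torch.stack([x[i] + x[j] + x[k] for i, j, k in walks])  # (W, d)
    attn = torch.softmax(scorer(walk_embs).squeeze(-1), dim=0)          # (W,)
    return attn @ walk_embs                                             # graph embedding (d,)

# Toy usage on a 4-node path graph with 8-dim features.
adj = torch.tensor([[0, 1, 0, 0], [1, 0, 1, 0],
                    [0, 1, 0, 1], [0, 0, 1, 0]], dtype=torch.float)
g = walk_attention_readout(torch.randn(4, 8), adj, nn.Linear(8, 1))
```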
arXiv Detail & Related papers (2021-10-06T11:41:12Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
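The core contrastive idea can be sketched with a standard InfoNCE loss, where two augmented views of the same subgraph form a positive pair and the rest of the batch serves as negatives; the temperature and names below are illustrative assumptions rather than GCC's exact recipe.

```python
# Hypothetical sketch of contrastive subgraph pre-training: embeddings of two
# augmented views of the same subgraph should be more similar to each other
# than to every other subgraph in the batch (InfoNCE objective).
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.07):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(len(z1))             # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: a batch of 32 subgraph embeddings from two augmented views.
z_view1, z_view2 = torch.randn(32, 64), torch.randn(32, 64)
loss = info_nce(z_view1, z_view2)
```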
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
- Node Masking: Making Graph Neural Networks Generalize and Scale Better [71.51292866945471]
Graph Neural Networks (GNNs) have received a lot of interest recently.
In this paper, we utilize some theoretical tools to better visualize the operations performed by state-of-the-art spatial GNNs.
We introduce a simple concept, Node Masking, that allows them to generalize and scale better.
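A minimal sketch of the masking idea follows, assuming a simple feature-and-edge zeroing scheme; the paper's exact masking strategy and mask rate may differ.

```python
# Hypothetical sketch of node masking: randomly zero out a fraction of node
# features and drop their incident edges during training, so the GNN cannot
# over-rely on any single node.
import torch

def mask_nodes(x, adj, mask_rate=0.15):
    keep = (torch.rand(len(x)) > mask_rate).float()           # 1 = keep, 0 = mask
    x_masked = x * keep.unsqueeze(1)                          # zero masked features
    adj_masked = adj * keep.unsqueeze(0) * keep.unsqueeze(1)  # drop their edges
    return x_masked, adj_masked

# Toy usage: mask roughly 15% of 10 nodes per training step.
x, adj = torch.randn(10, 8), (torch.rand(10, 10) > 0.7).float()
x_m, adj_m = mask_nodes(x, adj)
```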
arXiv Detail & Related papers (2020-01-17T06:26:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.