Rethinking Propagation for Unsupervised Graph Domain Adaptation
- URL: http://arxiv.org/abs/2402.05660v1
- Date: Thu, 8 Feb 2024 13:24:57 GMT
- Title: Rethinking Propagation for Unsupervised Graph Domain Adaptation
- Authors: Meihan Liu, Zeyu Fang, Zhen Zhang, Ming Gu, Sheng Zhou, Xin Wang,
Jiajun Bu
- Abstract summary: Unsupervised Graph Domain Adaptation (UGDA) aims to transfer knowledge from a labelled source graph to an unlabelled target graph.
We propose a simple yet effective approach called A2GNN for graph domain adaptation.
- Score: 17.443218657417454
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised Graph Domain Adaptation (UGDA) aims to transfer knowledge from a
labelled source graph to an unlabelled target graph in order to address the
distribution shifts between graph domains. Previous works have primarily
focused on aligning data from the source and target graph in the representation
space learned by graph neural networks (GNNs). However, the inherent
generalization capability of GNNs has been largely overlooked. Motivated by our
empirical analysis, we reevaluate the role of GNNs in graph domain adaptation
and uncover the pivotal role of the propagation process in GNNs for adapting to
different graph domains. We provide a comprehensive theoretical analysis of
UGDA and derive a generalization bound for multi-layer GNNs. By formulating the
Lipschitz constant of k-layer GNNs, we show that the target risk bound can be
tightened by removing propagation layers from the source graph and stacking
multiple propagation layers on the target graph. Based on the empirical and theoretical
analysis mentioned above, we propose a simple yet effective approach called
A2GNN for graph domain adaptation. Through extensive experiments on real-world
datasets, we demonstrate the effectiveness of our proposed A2GNN framework.
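The asymmetric design suggested by this analysis, a shared feature transformation with few (or no) propagation steps on the source graph and several stacked steps on the target graph, can be sketched as follows. This is a hypothetical illustration with made-up names (`a2gnn_forward`, `normalize_adj`), not the authors' implementation:

```python
import numpy as np

def normalize_adj(A):
    """Symmetrically normalized adjacency with self-loops:
    D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def a2gnn_forward(X, A, W, k):
    """Shared linear transformation followed by k parameter-free
    propagation steps; k is small (even 0) for the source graph and
    larger for the target graph."""
    H = X @ W
    P = normalize_adj(A)
    for _ in range(k):
        H = P @ H
    return H

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)  # toy 4-node path graph
X = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 2))

H_src = a2gnn_forward(X, A, W, k=0)  # no propagation on the source graph
H_tgt = a2gnn_forward(X, A, W, k=3)  # stacked propagation on the target graph
```

With k=0 the source branch reduces to a plain linear transform, while the target branch repeatedly smooths features over the graph; both share the same parameters W.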
Related papers
- A Manifold Perspective on the Statistical Generalization of Graph Neural Networks [84.01980526069075]
Graph Neural Networks (GNNs) combine information from adjacent nodes by successive applications of graph convolutions.
We study the generalization gaps of GNNs on both node-level and graph-level tasks.
We show that the generalization gaps decrease with the number of nodes in the training graphs.
arXiv Detail & Related papers (2024-06-07T19:25:02Z)
- Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
The generalization ability of existing GNNs degrades when there are distribution shifts between training and testing graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
arXiv Detail & Related papers (2023-12-19T12:25:10Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Edge Directionality Improves Learning on Heterophilic Graphs [42.5099159786891]
We introduce Directed Graph Neural Network (Dir-GNN), a novel framework for deep learning on directed graphs.
Dir-GNN can be used to extend any Message Passing Neural Network (MPNN) to account for edge directionality information.
We prove that Dir-GNN matches the expressivity of the Directed Weisfeiler-Lehman test, exceeding that of conventional MPNNs.
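The core directed-aggregation idea can be sketched minimally as below. Function and parameter names are invented for illustration; the actual Dir-GNN framework generalizes arbitrary MPNN aggregators rather than the fixed sum-and-tanh layer shown here:

```python
import numpy as np

def dir_gnn_layer(X, A, W_self, W_in, W_out):
    """One directed message-passing layer on adjacency A (A[i, j] = 1
    for an edge i -> j): out-neighbours and in-neighbours are aggregated
    with separate weight matrices, then combined with a self term."""
    msg_out = A @ X @ W_out    # aggregate along edge direction
    msg_in = A.T @ X @ W_in    # aggregate against edge direction
    return np.tanh(X @ W_self + msg_in + msg_out)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)  # a directed 3-cycle
X = rng.normal(size=(3, 4))
W_self = rng.normal(size=(4, 4))
W_in = rng.normal(size=(4, 4))
W_out = rng.normal(size=(4, 4))
H = dir_gnn_layer(X, A, W_self, W_in, W_out)
```

Symmetrizing A would collapse `msg_in` and `msg_out` into one term; keeping them separate is what lets the layer distinguish edge directions.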
arXiv Detail & Related papers (2023-05-17T18:06:43Z)
- Relation Embedding based Graph Neural Networks for Handling Heterogeneous Graph [58.99478502486377]
We propose a simple yet efficient framework to make the homogeneous GNNs have adequate ability to handle heterogeneous graphs.
Specifically, we propose Relation Embedding based Graph Neural Networks (RE-GNNs), which employ only one parameter per relation to embed the importance of edge type relations and self-loop connections.
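The one-parameter-per-relation idea can be sketched as follows. This is illustrative only: the name `re_gnn_layer` is made up, and the real RE-GNN includes details (e.g. normalization and constraints on the relation scalars) not shown here:

```python
import numpy as np

def re_gnn_layer(X, rel_adjs, rel_weights, self_weight, W):
    """Homogeneous-style GNN layer on a heterogeneous graph: each edge
    type's messages are scaled by a single scalar, and the self-loop gets
    its own scalar, so only one extra parameter is added per relation."""
    H = self_weight * X
    for A_r, w_r in zip(rel_adjs, rel_weights):
        H = H + w_r * (A_r @ X)
    return H @ W

rng = np.random.default_rng(0)
# two edge types, each with its own adjacency matrix over 5 nodes
rel_adjs = [rng.integers(0, 2, size=(5, 5)).astype(float) for _ in range(2)]
X = rng.normal(size=(5, 3))
W = rng.normal(size=(3, 3))
H = re_gnn_layer(X, rel_adjs, rel_weights=[0.7, 0.3], self_weight=1.0, W=W)
```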
arXiv Detail & Related papers (2022-09-23T05:24:18Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Distribution Preserving Graph Representation Learning [11.340722297341788]
Graph neural networks (GNNs) are effective at modelling graphs to learn distributed representations of nodes and entire graphs.
We propose Distribution Preserving GNN (DP-GNN) - a GNN framework that can improve the generalizability of expressive GNN models.
We evaluate the proposed DP-GNN framework on multiple benchmark datasets for graph classification tasks.
arXiv Detail & Related papers (2022-02-27T19:16:26Z)
- A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of core sub-dataset and sparse sub-network.
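Joint pruning of the graph and the model can be sketched with simple magnitude masks. This is a hypothetical stand-in: UGS learns its sparsification masks end-to-end with sparsity regularization rather than thresholding by magnitude as done here:

```python
import numpy as np

def magnitude_mask(M, sparsity):
    """Binary mask that zeroes out the `sparsity` fraction of entries
    with the smallest magnitudes."""
    k = int(M.size * sparsity)
    if k == 0:
        return np.ones_like(M)
    threshold = np.partition(np.abs(M).ravel(), k - 1)[k - 1]
    return (np.abs(M) > threshold).astype(M.dtype)

rng = np.random.default_rng(1)
A = rng.random((5, 5))          # toy (dense) adjacency matrix
W = rng.normal(size=(4, 4))     # toy GNN weight matrix
A_sparse = A * magnitude_mask(A, 0.5)  # prune graph connections
W_sparse = W * magnitude_mask(W, 0.5)  # prune model weights
```

The pair (sparse sub-graph, sparse sub-network) plays the role of the "graph lottery ticket" described above: both the input structure and the parameters are sparsified together.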
arXiv Detail & Related papers (2021-02-12T21:52:43Z)
- Transfer Learning of Graph Neural Networks with Ego-graph Information Maximization [41.867290324754094]
Graph neural networks (GNNs) have achieved superior performance in various applications, but training dedicated GNNs can be costly for large-scale graphs.
In this work, we establish a theoretically grounded and practically useful framework for the transfer learning of GNNs.
arXiv Detail & Related papers (2020-09-11T02:31:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.