Rethinking Propagation for Unsupervised Graph Domain Adaptation
- URL: http://arxiv.org/abs/2402.05660v1
- Date: Thu, 8 Feb 2024 13:24:57 GMT
- Title: Rethinking Propagation for Unsupervised Graph Domain Adaptation
- Authors: Meihan Liu, Zeyu Fang, Zhen Zhang, Ming Gu, Sheng Zhou, Xin Wang,
Jiajun Bu
- Abstract summary: Unsupervised Graph Domain Adaptation (UGDA) aims to transfer knowledge from a labelled source graph to an unlabelled target graph.
We propose a simple yet effective approach called A2GNN for graph domain adaptation.
- Score: 17.443218657417454
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised Graph Domain Adaptation (UGDA) aims to transfer knowledge from a
labelled source graph to an unlabelled target graph in order to address the
distribution shifts between graph domains. Previous works have primarily
focused on aligning data from the source and target graph in the representation
space learned by graph neural networks (GNNs). However, the inherent
generalization capability of GNNs has been largely overlooked. Motivated by our
empirical analysis, we reevaluate the role of GNNs in graph domain adaptation
and uncover the pivotal role of the propagation process in GNNs for adapting to
different graph domains. We provide a comprehensive theoretical analysis of
UGDA and derive a generalization bound for multi-layer GNNs. By formulating
the Lipschitz constant of k-layer GNNs, we show that the target risk bound can
be made tighter by removing propagation layers from the source graph and
stacking multiple propagation layers on the target graph. Based on the empirical and theoretical
analysis mentioned above, we propose a simple yet effective approach called
A2GNN for graph domain adaptation. Through extensive experiments on real-world
datasets, we demonstrate the effectiveness of our proposed A2GNN framework.
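The abstract's recipe is concrete enough to sketch. Below is a minimal PyTorch illustration (my own, not the authors' released code) of the asymmetric-propagation idea: one feature transformation shared across domains, few or zero propagation steps on the source graph, and many stacked propagation steps on the target graph. The class name, layer sizes, and the k_source/k_target defaults are illustrative assumptions.

```python
import torch
import torch.nn as nn


def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalize a dense adjacency matrix with added self-loops."""
    adj = adj + torch.eye(adj.size(0), device=adj.device)
    deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)


class AsymmetricGNN(nn.Module):
    """Shared transformation; domain-dependent number of propagation steps."""

    def __init__(self, in_dim, hid_dim, n_classes, k_source=0, k_target=8):
        super().__init__()
        self.transform = nn.Sequential(  # shared across source and target
            nn.Linear(in_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, n_classes)
        )
        self.k_source, self.k_target = k_source, k_target

    def propagate(self, h, adj_norm, k):
        for _ in range(k):  # parameter-free neighbourhood averaging
            h = adj_norm @ h
        return h

    def forward(self, x, adj, domain="source"):
        h = self.transform(x)  # transform first, then propagate
        k = self.k_source if domain == "source" else self.k_target
        return self.propagate(h, normalize_adj(adj), k)
```

In a complete UGDA pipeline this would be trained with a supervised loss on the source branch plus an alignment term between domains; the sketch isolates only the asymmetry in propagation depth that the abstract argues tightens the target risk bound.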
Related papers
- Domain Adaptive Unfolded Graph Neural Networks [6.675805308519987]
Graph neural networks (GNNs) have made significant progress in numerous graph machine learning tasks.
In this work, we consider how to facilitate graph domain adaptation (GDA) with architectural enhancement.
We propose a simple yet effective strategy called cascaded propagation (CP) which is guaranteed to decrease the lower-level objective value.
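The "lower-level objective" refers to the energy that unfolded GNNs minimize by construction; its exact form is not given in this summary, so the following is a common choice from the unfolding literature, shown only for context:

```latex
E(F) = \|F - f_\theta(X)\|_F^2
     + \lambda\,\mathrm{tr}\!\left(F^\top (I - \tilde{A})\, F\right),
\qquad
F^{(k+1)} = F^{(k)} - \eta\,\nabla_F E\big(F^{(k)}\big)
```

Each propagation layer corresponds to one descent step on E, so a scheme such as cascaded propagation that provably decreases E keeps stacked layers consistent with this optimization view.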
arXiv Detail & Related papers (2024-11-20T09:05:36Z)
- GraphLoRA: Structure-Aware Contrastive Low-Rank Adaptation for Cross-Graph Transfer Learning [17.85404473268992]
Graph Neural Networks (GNNs) have demonstrated remarkable proficiency in handling a range of graph analytical tasks.
Despite their versatility, GNNs face significant challenges in transferability, limiting their utility in real-world applications.
We propose GraphLoRA, an effective and parameter-efficient method for transferring well-trained GNNs to diverse graph domains.
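The summary does not detail GraphLoRA's design, but the low-rank-adaptation pattern it builds on is standard. Here is a minimal sketch of a LoRA-style adapter wrapped around a frozen linear layer of a pre-trained GNN; the rank, scaling, and placement are illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep the pre-trained weights frozen
        # low-rank factors: update = B @ A, initialized so the update starts at zero
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.lora_a.t() @ self.lora_b.t())
```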
arXiv Detail & Related papers (2024-09-25T06:57:42Z)
- A Manifold Perspective on the Statistical Generalization of Graph Neural Networks [84.01980526069075]
We take a manifold perspective to establish the statistical generalization theory of GNNs on graphs sampled from a manifold in the spectral domain.
We prove that the generalization bounds of GNNs decrease linearly with the size of the graphs on a logarithmic scale, and increase linearly with the spectral continuity constants of the filter functions.
arXiv Detail & Related papers (2024-06-07T19:25:02Z)
- Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
The generalization ability of existing GNNs degrades when there are distribution shifts between testing and training graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
arXiv Detail & Related papers (2023-12-19T12:25:10Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph-level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Relation Embedding based Graph Neural Networks for Handling Heterogeneous Graph [58.99478502486377]
We propose a simple yet efficient framework that gives homogeneous GNNs the ability to handle heterogeneous graphs.
Specifically, we propose Relation Embedding based Graph Neural Networks (RE-GNNs), which employ only one parameter per relation to embed the importance of edge type relations and self-loop connections.
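The one-parameter-per-relation idea can be illustrated directly. In the minimal sketch below, each edge type, plus the self-loop, receives a single learnable importance scalar that scales its adjacency matrix before a shared linear transformation; the softmax normalization and naming are my assumptions, not necessarily the paper's exact design.

```python
import torch
import torch.nn as nn


class RelationEmbeddingLayer(nn.Module):
    """Homogeneous GCN-style layer with one importance scalar per edge type."""

    def __init__(self, in_dim, out_dim, n_relations):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        # one learnable scalar per relation, plus one for the self-loop
        self.rel_weight = nn.Parameter(torch.ones(n_relations + 1))

    def forward(self, x, rel_adjs):
        # rel_adjs: one normalized adjacency matrix per relation (edge type)
        w = torch.softmax(self.rel_weight, dim=0)
        h = w[-1] * x  # weighted self-loop contribution
        for r, adj in enumerate(rel_adjs):
            h = h + w[r] * (adj @ x)
        return self.linear(h)
```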
arXiv Detail & Related papers (2022-09-23T05:24:18Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Distribution Preserving Graph Representation Learning [11.340722297341788]
Graph neural networks (GNNs) are effective at modelling graphs to learn distributed representations of nodes and entire graphs.
We propose Distribution Preserving GNN (DP-GNN) - a GNN framework that can improve the generalizability of expressive GNN models.
We evaluate the proposed DP-GNN framework on multiple benchmark datasets for graph classification tasks.
arXiv Detail & Related papers (2022-02-27T19:16:26Z)
- A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair consisting of a core sub-dataset and a sparse sub-network.
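The joint sparsification described above can be sketched with learnable masks over both the adjacency matrix and the layer weights, pruned by magnitude. Mask granularity, the magnitude criterion, and the pruning ratio are illustrative assumptions, not the UGS implementation.

```python
import torch
import torch.nn as nn


class MaskedGCNLayer(nn.Module):
    """GCN layer whose adjacency and weights are both element-wise masked."""

    def __init__(self, in_dim, out_dim, n_nodes):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(in_dim, out_dim))
        nn.init.xavier_uniform_(self.weight)
        self.weight_mask = nn.Parameter(torch.ones(in_dim, out_dim))
        self.adj_mask = nn.Parameter(torch.ones(n_nodes, n_nodes))

    def forward(self, x, adj):
        masked_adj = adj * self.adj_mask           # sparsify graph connectivity
        masked_w = self.weight * self.weight_mask  # sparsify model weights
        return masked_adj @ (x @ masked_w)


def prune_by_magnitude(mask: torch.Tensor, ratio: float) -> None:
    """Zero out the smallest-magnitude fraction `ratio` of mask entries."""
    k = int(mask.numel() * ratio)
    if k == 0:
        return
    threshold = mask.abs().flatten().kthvalue(k).values
    with torch.no_grad():
        mask.mul_((mask.abs() > threshold).float())
```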
arXiv Detail & Related papers (2021-02-12T21:52:43Z)