Non-IID Transfer Learning on Graphs
- URL: http://arxiv.org/abs/2212.08174v1
- Date: Thu, 15 Dec 2022 22:29:29 GMT
- Title: Non-IID Transfer Learning on Graphs
- Authors: Jun Wu, Jingrui He, Elizabeth Ainsworth
- Abstract summary: Transfer learning refers to the transfer of knowledge from a relevant source domain to a target domain.
We propose rigorous generalization bounds and algorithms for cross-network transfer learning from a source graph to a target graph.
- Score: 35.84135001172101
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transfer learning refers to the transfer of knowledge or information from a
relevant source domain to a target domain. However, most existing transfer
learning theories and algorithms focus on IID tasks, where the source/target
samples are assumed to be independent and identically distributed. Very little
effort is devoted to theoretically studying the knowledge transferability on
non-IID tasks, e.g., cross-network mining. To bridge the gap, in this paper, we
propose rigorous generalization bounds and algorithms for cross-network
transfer learning from a source graph to a target graph. The crucial idea is to
characterize the cross-network knowledge transferability from the perspective
of the Weisfeiler-Lehman graph isomorphism test. To this end, we propose a
novel Graph Subtree Discrepancy to measure the graph distribution shift between
source and target graphs. Then the generalization error bounds on cross-network
transfer learning, including both cross-network node classification and link
prediction tasks, can be derived in terms of the source knowledge and the Graph
Subtree Discrepancy across domains. This thereby motivates us to propose a
generic graph adaptive network (GRADE) to minimize the distribution shift
between source and target graphs for cross-network transfer learning.
Experimental results verify the effectiveness and efficiency of our GRADE
framework on both cross-network node classification and cross-domain
recommendation tasks.
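The abstract's core idea is to measure graph distribution shift through Weisfeiler-Lehman subtree patterns. The following is a minimal toy sketch of that idea only, not the paper's actual Graph Subtree Discrepancy or the GRADE implementation: the function names and the total-variation comparison of WL color histograms are our own simplifications.

```python
from collections import Counter

def wl_subtree_histogram(adj, labels, iterations=2):
    """WL color refinement: iteratively hash each node's color together
    with the sorted multiset of its neighbors' colors, accumulating a
    histogram of subtree-pattern colors across all iterations."""
    colors = dict(labels)          # node -> color, starting from input labels
    hist = Counter(colors.values())
    for _ in range(iterations):
        new_colors = {}
        for node, neigh in adj.items():
            signature = (colors[node], tuple(sorted(colors[n] for n in neigh)))
            new_colors[node] = hash(signature)   # compressed new color
        colors = new_colors
        hist.update(colors.values())
    return hist

def subtree_discrepancy(adj_s, labels_s, adj_t, labels_t, iterations=2):
    """A crude proxy for distribution shift between two graphs:
    total variation distance between normalized WL subtree histograms."""
    h_s = wl_subtree_histogram(adj_s, labels_s, iterations)
    h_t = wl_subtree_histogram(adj_t, labels_t, iterations)
    n_s, n_t = sum(h_s.values()), sum(h_t.values())
    keys = set(h_s) | set(h_t)
    return 0.5 * sum(abs(h_s[k] / n_s - h_t[k] / n_t) for k in keys)

# Two toy graphs with identical node labels: a triangle and a 3-node path.
tri = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
labs = {0: 'a', 1: 'a', 2: 'a'}
print(round(subtree_discrepancy(tri, labs, path, labs), 3))  # prints 0.556
```

Identical graphs yield a discrepancy of 0, while structurally different graphs with the same label distribution are still separated, because the WL refinement encodes local topology. The paper's actual bounds and the GRADE network operate on learned representations rather than this discrete histogram distance.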
Related papers
- DELTA: Dual Consistency Delving with Topological Uncertainty for Active Graph Domain Adaptation [14.61592658071535]
We study the problem of active graph domain adaptation, which selects a small set of informative nodes on the target graph for extra annotation.
This problem is highly challenging due to the complicated topological relationships and the distribution discrepancy across graphs.
We propose a novel approach named Dual Consistency Delving with Topological Uncertainty (DELTA) for active graph domain adaptation.
arXiv Detail & Related papers (2024-09-13T16:06:18Z)
- Multi-source Unsupervised Domain Adaptation on Graphs with Transferability Modeling [35.39202826643388]
We present the framework Selective Multi-source Adaptation for Graph (method), with a graph-modeling-based domain selector, a sub-graph node selector, and a bi-level alignment objective.
Results on five graph datasets show the effectiveness of the proposed method.
arXiv Detail & Related papers (2024-06-14T22:05:21Z)
- GNN-LoFI: a Novel Graph Neural Network through Localized Feature-based Histogram Intersection [51.608147732998994]
Graph neural networks are increasingly becoming the framework of choice for graph-based machine learning.
We propose a new graph neural network architecture that substitutes classical message passing with an analysis of the local distribution of node features.
arXiv Detail & Related papers (2024-01-17T13:04:23Z)
- Domain Adaptive Graph Classification [0.0]
We introduce Dual Adversarial Graph Representation Learning (DAGRL), which explores the graph topology from dual branches and mitigates domain discrepancies via dual adversarial learning.
Our approach incorporates adaptive perturbations into the dual branches, which align the source and target distribution to address domain discrepancies.
arXiv Detail & Related papers (2023-12-21T02:37:56Z)
- Domain-adaptive Message Passing Graph Neural Network [67.35534058138387]
Cross-network node classification (CNNC) aims to classify nodes in a label-deficient target network by transferring the knowledge from a source network with abundant labels.
We propose a domain-adaptive message passing graph neural network (DM-GNN), which integrates graph neural network (GNN) with conditional adversarial domain adaptation.
arXiv Detail & Related papers (2023-08-31T05:26:08Z)
- You Only Transfer What You Share: Intersection-Induced Graph Transfer Learning for Link Prediction [79.15394378571132]
We investigate a previously overlooked phenomenon: in many cases, a densely connected, complementary graph can be found for the original graph.
The denser graph may share nodes with the original graph, which offers a natural bridge for transferring selective, meaningful knowledge.
We identify this setting as Graph Intersection-induced Transfer Learning (GITL), which is motivated by practical applications in e-commerce or academic co-authorship predictions.
arXiv Detail & Related papers (2023-02-27T22:56:06Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
- Tensor Graph Convolutional Networks for Multi-relational and Robust Learning [74.05478502080658]
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs represented by a tensor.
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance over protein-to-protein interaction networks.
arXiv Detail & Related papers (2020-03-15T02:33:21Z)
- Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences of their use.