Zero-shot Domain Adaptation of Heterogeneous Graphs via Knowledge
Transfer Networks
- URL: http://arxiv.org/abs/2203.02018v1
- Date: Thu, 3 Mar 2022 21:00:23 GMT
- Title: Zero-shot Domain Adaptation of Heterogeneous Graphs via Knowledge
Transfer Networks
- Authors: Minji Yoon, John Palowitch, Dustin Zelle, Ziniu Hu, Ruslan
Salakhutdinov, Bryan Perozzi
- Abstract summary: Heterogeneous graph neural networks (HGNNs) have shown superior performance as powerful representation learning techniques.
However, there is no direct way to learn using labels rooted at different node types.
In this work, we propose a novel domain adaptation method, Knowledge Transfer Networks for HGNNs (HGNN-KTN).
- Score: 72.82524864001691
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: How can we make predictions for nodes in a heterogeneous graph when an entire
type of node (e.g. user) has no labels (perhaps due to privacy issues) at all?
Although heterogeneous graph neural networks (HGNNs) have shown superior
performance as powerful representation learning techniques, there is no direct
way to learn using labels rooted at different node types. Domain adaptation
(DA) targets this setting; however, existing DA methods cannot be applied
directly to HGNNs. In heterogeneous graphs, the source and target domains have
different modalities, so HGNNs provide them with different feature extractors,
whereas most DA methods assume that the source and target domains share a
common feature extractor. In
this work, we address the issue of zero-shot domain adaptation in HGNNs. We
first theoretically induce a relationship between source and target domain
features extracted from HGNNs, then propose a novel domain adaptation method,
Knowledge Transfer Networks for HGNNs (HGNN-KTN). HGNN-KTN learns the
relationship between source and target features, then maps the target
distributions into the source domain. HGNN-KTN outperforms state-of-the-art
baselines, achieving up to 73.3% higher MRR on 18 different domain adaptation
tasks on real-world benchmark graphs.
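As read from the abstract, the core mechanism is a learnable map from target-domain HGNN features into the source feature space, so that a classifier trained only on source labels can score mapped target nodes. The sketch below is a minimal illustration of that idea under our own assumptions (a shared embedding size, a linear map named ktn_map, and an L2 matching objective); it is not the authors' implementation.

```python
import torch
import torch.nn as nn

d = 128                            # assumed shared HGNN embedding size
ktn_map = nn.Linear(d, d)          # learnable target-to-source feature map (assumption)
classifier = nn.Linear(d, 10)      # trained on source-domain labels only

opt = torch.optim.Adam(ktn_map.parameters(), lr=1e-3)

def ktn_step(h_src, h_tgt):
    """One adaptation step: pull mapped target features toward source features.

    h_src, h_tgt: HGNN embeddings for source-type and target-type nodes;
    the L2 matching loss is an illustrative stand-in for the paper's
    transfer objective.
    """
    loss = ((ktn_map(h_tgt) - h_src) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Zero-shot inference on the label-free node type:
# logits = classifier(ktn_map(h_tgt_new))
```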
Related papers
- GraphLoRA: Structure-Aware Contrastive Low-Rank Adaptation for Cross-Graph Transfer Learning [17.85404473268992]
Graph Neural Networks (GNNs) have demonstrated remarkable proficiency in handling a range of graph analytical tasks.
Despite their versatility, GNNs face significant challenges in transferability, limiting their utility in real-world applications.
We propose GraphLoRA, an effective and parameter-efficient method for transferring well-trained GNNs to diverse graph domains.
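A hedged sketch of the low-rank adaptation idea the title refers to: freeze a pre-trained weight matrix and learn only a small low-rank update. This is generic LoRA applied to a linear layer; GraphLoRA's structure-aware contrastive objective is not reproduced here, and the rank and initialization are our choices.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Low-rank adaptation of a frozen, pre-trained linear layer.

    Generic LoRA (W_frozen plus B @ A); GraphLoRA's structure-aware
    contrastive objective is not reproduced here.
    """
    def __init__(self, pretrained: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = pretrained
        for p in self.base.parameters():
            p.requires_grad = False           # keep pre-trained weights frozen
        d_in, d_out = pretrained.in_features, pretrained.out_features
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))  # zero init: no change at start

    def forward(self, x):
        return self.base(x) + x @ self.A.t() @ self.B.t()
```

Only A and B receive gradients, so the number of trainable parameters per adapted layer drops from d_in * d_out to rank * (d_in + d_out).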
arXiv Detail & Related papers (2024-09-25T06:57:42Z)
- SA-GDA: Spectral Augmentation for Graph Domain Adaptation [38.71041292000361]
Graph neural networks (GNNs) have achieved impressive results on graph-related tasks.
This paper presents Spectral Augmentation for Graph Domain Adaptation (SA-GDA) for graph node classification.
We develop a dual graph convolutional network that jointly exploits local and global consistency for feature aggregation.
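A loose sketch of what a dual graph convolutional network with local and global branches could look like: one branch aggregates over the observed adjacency (local consistency), the other over a global affinity matrix such as feature similarity or personalized PageRank (global consistency). The branch definitions and the averaging fusion are our assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DualGCN(nn.Module):
    """Two-branch GCN sketch: local branch uses the observed adjacency,
    global branch uses a global affinity matrix; fusion by averaging
    is our simplification.
    """
    def __init__(self, d_in, d_out):
        super().__init__()
        self.w_local = nn.Linear(d_in, d_out)
        self.w_global = nn.Linear(d_in, d_out)

    def forward(self, x, a_local, a_global):
        # a_local / a_global: dense, row-normalized [n, n] matrices
        h_local = torch.relu(a_local @ self.w_local(x))
        h_global = torch.relu(a_global @ self.w_global(x))
        return 0.5 * (h_local + h_global)
```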
arXiv Detail & Related papers (2024-08-17T13:01:45Z)
- Finding Diverse and Predictable Subgraphs for Graph Domain Generalization [88.32356432272356]
This paper focuses on out-of-distribution generalization on graphs, where performance drops under unseen distribution shifts.
We propose a new graph domain generalization framework, dubbed DPS, which constructs multiple populations from the source domains.
Experiments on both node-level and graph-level benchmarks show that DPS achieves impressive performance on various graph domain generalization tasks.
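The summary's key step is building multiple training populations from the source domains. As a loose stand-in for the paper's learned subgraph generators (which are trained to be diverse and predictable), the sketch below builds populations by random edge dropping; every name and parameter here is illustrative.

```python
import torch

def edge_drop_populations(edge_index, k=4, p=0.2, seed=0):
    """Build k distinct training 'populations' by dropping edges.

    edge_index: [2, E] COO edges. Fixed random dropping is a simplified
    stand-in for DPS's learned, diversity-driven subgraph generators.
    """
    g = torch.Generator().manual_seed(seed)
    populations = []
    for _ in range(k):
        keep = torch.rand(edge_index.shape[1], generator=g) > p
        populations.append(edge_index[:, keep])
    return populations
```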
arXiv Detail & Related papers (2022-06-19T07:57:56Z)
- Source Free Unsupervised Graph Domain Adaptation [60.901775859601685]
Unsupervised Graph Domain Adaptation (UGDA) demonstrates its practical value in reducing the labeling cost for node classification.
Most existing UGDA methods rely heavily on the labeled graph in the source domain.
In some real-world scenarios, however, the source graph is inaccessible because of privacy issues.
We propose a novel scenario named Source Free Unsupervised Graph Domain Adaptation (SFUGDA).
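The summary defines the setting but not the algorithm, so the sketch below illustrates the source-free constraint with a generic stand-in: adapting a source-trained GNN using only the unlabeled target graph via confidence-thresholded pseudo-labeling. This is a standard technique for the setting, not SFUGDA's actual method, and the model interface is assumed.

```python
import torch
import torch.nn.functional as F

def source_free_adapt(model, x_tgt, a_tgt, steps=100, tau=0.9, lr=1e-4):
    """Adapt a source-trained GNN using only the target graph.

    Generic pseudo-labeling stand-in for the source-free setting:
    high-confidence target predictions supervise further training.
    `model(x, a)` returning logits is an assumed interface.
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        logits = model(x_tgt, a_tgt)
        probs = F.softmax(logits, dim=-1)
        conf, pseudo = probs.max(dim=-1)
        mask = conf > tau                     # trust only confident nodes
        if mask.any():
            loss = F.cross_entropy(logits[mask], pseudo[mask])
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```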
arXiv Detail & Related papers (2021-12-02T03:18:18Z)
- AdaGNN: A multi-modal latent representation meta-learner for GNNs based on AdaBoosting [0.38073142980733]
Graph Neural Networks (GNNs) focus on extracting intrinsic network features.
We propose a boosting-based meta-learner for GNNs.
AdaGNN performs exceptionally well for applications with rich and diverse node neighborhood information.
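Since the title names AdaBoosting as the mechanism, here is a minimal sketch of boosting over GNN weak learners with multiplicative sample reweighting. The three callbacks (make_gnn, train_step, predict) are assumed interfaces for illustration, not AdaGNN's API.

```python
import torch

def adaboost_gnns(make_gnn, train_step, predict, x, a, y, rounds=5):
    """AdaBoost-style ensemble over GNN weak learners (illustrative).

    make_gnn(): returns a fresh GNN; train_step(model, x, a, y, w) trains it
    with per-node weights w; predict(model, x, a) returns hard labels.
    """
    n = y.shape[0]
    w = torch.full((n,), 1.0 / n)             # uniform node weights to start
    ensemble = []
    for _ in range(rounds):
        model = make_gnn()
        train_step(model, x, a, y, w)
        wrong = (predict(model, x, a) != y).float()
        err = ((w * wrong).sum() / w.sum()).clamp(1e-6, 1 - 1e-6)
        alpha = torch.log((1 - err) / err)    # learner weight
        w = w * torch.exp(alpha * wrong)      # upweight misclassified nodes
        w = w / w.sum()
        ensemble.append((alpha.item(), model))
    return ensemble
```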
arXiv Detail & Related papers (2021-08-14T03:07:26Z)
- Identity-aware Graph Neural Networks [63.6952975763946]
We develop a class of message passing Graph Neural Networks (ID-GNNs) with greater expressive power than the 1-WL test.
ID-GNN extends existing GNN architectures by inductively considering nodes' identities during message passing.
We show that transforming existing GNNs to ID-GNNs yields on average 40% accuracy improvement on challenging node, edge, and graph property prediction tasks.
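One way to picture the identity mechanism: when embedding a node from its ego network, mark the center node so that messages involving it are distinguishable from messages between other nodes. The feature-flag encoding below is our simplification of ID-GNN's center-node coloring; the full model uses heterogeneous message passing instead.

```python
import torch

def identity_augment(x, center):
    """Append a one-hot identity flag marking the ego network's center node.

    x: [n, d] node features for an ego network; center: index of the root.
    A simplified stand-in for ID-GNN's inductive node coloring.
    """
    flag = torch.zeros(x.shape[0], 1)
    flag[center] = 1.0
    return torch.cat([x, flag], dim=1)   # [n, d+1], root now distinguishable
```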
arXiv Detail & Related papers (2021-01-25T18:59:01Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order WL tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of graph representation learning techniques.
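A minimal sketch of one distance-encoding instance: append each node's shortest-path distance to the target node set as an extra feature. Shortest-path distance is only one of the distance measures DE admits (landing probabilities are another); the distance cap and the concatenation details below are our choices.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def distance_encode(adj, target_set, x, max_dist=10):
    """Append shortest-path-distance features w.r.t. a target node set.

    adj: [n, n] scipy-compatible adjacency; target_set: list of node indices
    whose joint representation is being learned; x: [n, d] node features.
    """
    d = shortest_path(adj, unweighted=True, indices=target_set)  # [|S|, n]
    d = np.minimum(np.nan_to_num(d, posinf=max_dist), max_dist)  # cap unreachable
    de = d.min(axis=0, keepdims=True).T                          # [n, 1] distance to set
    return np.concatenate([x, de], axis=1)
```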
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
- Towards Deeper Graph Neural Networks with Differentiable Group Normalization [61.20639338417576]
Graph neural networks (GNNs) learn the representation of a node by aggregating its neighbors.
Over-smoothing is one of the key issues which limit the performance of GNNs as the number of layers increases.
We introduce two over-smoothing metrics and a novel technique, differentiable group normalization (DGN).
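A sketch of the normalization recipe as we understand it: softly assign nodes to groups, normalize embeddings within each group, and add the result back with a small residual weight so that embeddings of different groups stay separated as depth grows. The group count and the residual weight lam are assumed hyperparameters.

```python
import torch
import torch.nn as nn

class DiffGroupNorm(nn.Module):
    """Differentiable group normalization sketch: soft group assignment,
    per-group normalization, residual combination.
    """
    def __init__(self, d, groups=8, lam=0.01):
        super().__init__()
        self.assign = nn.Linear(d, groups)   # learnable soft group assignment
        self.norms = nn.ModuleList(nn.BatchNorm1d(d) for _ in range(groups))
        self.lam = lam

    def forward(self, h):
        s = torch.softmax(self.assign(h), dim=1)                   # [n, g]
        out = sum(norm(s[:, [i]] * h) for i, norm in enumerate(self.norms))
        return h + self.lam * out            # gently push groups apart
```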
arXiv Detail & Related papers (2020-06-12T07:18:02Z)
- Generalization and Representational Limits of Graph Neural Networks [46.20253808402385]
We prove that several important graph properties cannot be computed by graph neural networks (GNNs) that rely entirely on local information.
We provide the first data dependent generalization bounds for message passing GNNs.
Our bounds are much tighter than existing VC-dimension based guarantees for GNNs, and are comparable to Rademacher bounds for recurrent neural networks.
arXiv Detail & Related papers (2020-02-14T18:10:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.