Source Free Unsupervised Graph Domain Adaptation
- URL: http://arxiv.org/abs/2112.00955v4
- Date: Mon, 4 Dec 2023 19:49:43 GMT
- Title: Source Free Unsupervised Graph Domain Adaptation
- Authors: Haitao Mao, Lun Du, Yujia Zheng, Qiang Fu, Zelin Li, Xu Chen, Shi Han,
Dongmei Zhang
- Abstract summary: Unsupervised Graph Domain Adaptation (UGDA) demonstrates its practical value by reducing the labeling cost for node classification.
Most existing UGDA methods heavily rely on the labeled graph in the source domain.
In some real-world scenarios, the source graph is inaccessible because of privacy issues.
We propose a novel scenario named Source Free Unsupervised Graph Domain Adaptation (SFUGDA)
- Score: 60.901775859601685
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) have achieved great success on a variety of
tasks with graph-structural data, among which node classification is an
essential one. Unsupervised Graph Domain Adaptation (UGDA) demonstrates its
practical value by reducing the labeling cost for node classification. It leverages
knowledge from a labeled graph (i.e., source domain) to tackle the same task on
another unlabeled graph (i.e., target domain). Most existing UGDA methods
heavily rely on the labeled graph in the source domain. They utilize labels
from the source domain as the supervision signal and are jointly trained on
both the source graph and the target graph. However, in some real-world
scenarios, the source graph is inaccessible because of privacy issues.
Therefore, we propose a novel scenario named Source Free Unsupervised Graph
Domain Adaptation (SFUGDA). In this scenario, the only information we can
leverage from the source domain is the well-trained source model, without any
exposure to the source graph and its labels. As a result, existing UGDA methods
are not feasible anymore. To address the non-trivial adaptation challenges in
this practical scenario, we propose a model-agnostic algorithm called SOGA for
domain adaptation to fully exploit the discriminative ability of the source
model while preserving the consistency of structural proximity on the target
graph. We prove the effectiveness of the proposed algorithm both theoretically
and empirically. The experimental results on four cross-domain tasks show
consistent improvements in the Macro-F1 score and Macro-AUC.
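The abstract names SOGA's two goals (exploiting the frozen source model's discriminative ability on target nodes, and preserving structural proximity on the target graph) but does not spell out its objectives. A minimal sketch of how such goals are commonly operationalized is shown below, using an entropy term that sharpens the source model's predictions plus a consistency term over target edges; all function names and the toy data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax over class logits."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def entropy_loss(probs, eps=1e-12):
    """Mean prediction entropy; minimizing it pushes the (frozen-architecture)
    source model toward confident decisions on unlabeled target nodes."""
    return -np.mean(np.sum(probs * np.log(probs + eps), axis=1))

def structural_consistency_loss(probs, edges, eps=1e-12):
    """Cross-entropy between the predictions of adjacent target nodes,
    encouraging neighbors to share labels (structural proximity)."""
    src, dst = edges[:, 0], edges[:, 1]
    return -np.mean(np.sum(probs[src] * np.log(probs[dst] + eps), axis=1))

# Toy target graph: 4 nodes, class logits produced by the source model.
logits = np.array([[2.0, 0.1], [1.8, 0.2], [0.3, 1.9], [0.1, 2.2]])
edges = np.array([[0, 1], [2, 3]])  # edges connect like-labeled nodes

p = softmax(logits)
loss = entropy_loss(p) + structural_consistency_loss(p, edges)
```

In an actual adaptation loop this scalar would be minimized with respect to the model parameters; the sketch only evaluates it once on fixed logits.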
Related papers
- GALA: Graph Diffusion-based Alignment with Jigsaw for Source-free Domain Adaptation [13.317620250521124]
Source-free domain adaptation is a crucial machine learning topic, as it has numerous real-world applications.
Recent graph neural network (GNN) approaches can suffer from serious performance decline due to domain shift and label scarcity.
We propose a novel method named Graph Diffusion-based Alignment with Jigsaw (GALA), tailored for source-free graph domain adaptation.
arXiv Detail & Related papers (2024-10-22T01:32:46Z)
- Rank and Align: Towards Effective Source-free Graph Domain Adaptation [16.941755478093153]
Graph neural networks (GNNs) have achieved impressive performance in graph domain adaptation.
However, extensive source graphs could be unavailable in real-world scenarios due to privacy and storage concerns.
We introduce a novel GNN-based approach called Rank and Align (RNA), which ranks graph similarities with spectral seriation for robust semantics learning.
arXiv Detail & Related papers (2024-08-22T08:00:50Z)
- Can Modifying Data Address Graph Domain Adaptation? [20.343259091425708]
Unsupervised Graph Domain Adaptation (UGDA) aims to facilitate knowledge transfer from a labeled source graph to an unlabeled target graph.
We propose GraphAlign, a novel UGDA method that generates a small yet transferable graph.
By exclusively training a GNN on this new graph with classic Empirical Risk Minimization (ERM), GraphAlign attains exceptional performance on the target graph.
arXiv Detail & Related papers (2024-07-27T17:56:31Z)
- Revisiting, Benchmarking and Understanding Unsupervised Graph Domain Adaptation [31.106636947179005]
Unsupervised Graph Domain Adaptation involves the transfer of knowledge from a label-rich source graph to an unlabeled target graph.
We present the first comprehensive benchmark for unsupervised graph domain adaptation named GDABench.
We observe that the performance of current UGDA models varies significantly across different datasets and adaptation scenarios.
arXiv Detail & Related papers (2024-07-09T06:44:09Z)
- GSINA: Improving Subgraph Extraction for Graph Invariant Learning via Graph Sinkhorn Attention [52.67633391931959]
Graph invariant learning (GIL) has been an effective approach to discovering the invariant relationships between graph data and its labels.
We propose a novel graph attention mechanism called Graph Sinkhorn Attention (GSINA)
GSINA is able to obtain meaningful, differentiable invariant subgraphs with controllable sparsity and softness.
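The summary above mentions controllable sparsity and softness via Sinkhorn attention. As a hedged illustration of the underlying operation (not GSINA's actual code), Sinkhorn normalization alternately rescales rows and columns so an attention matrix approaches a doubly stochastic one, with a temperature `tau` (an illustrative parameter name) controlling softness:

```python
import numpy as np

def sinkhorn(scores, tau=1.0, n_iters=50):
    """Sinkhorn normalization: alternately normalize rows and columns of
    exp(scores / tau) so the result approaches a doubly stochastic matrix.
    Smaller tau gives sharper (sparser-looking) attention."""
    m = np.exp(scores / tau)
    for _ in range(n_iters):
        m = m / m.sum(axis=1, keepdims=True)  # normalize rows
        m = m / m.sum(axis=0, keepdims=True)  # normalize columns
    return m

# Toy attention scores between 2 nodes and 2 subgraph slots.
scores = np.array([[1.0, 0.0], [0.0, 1.0]])
plan = sinkhorn(scores, tau=0.5, n_iters=50)
```

Because every step is differentiable, such a matrix can serve as a soft, trainable selection of subgraph elements.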
arXiv Detail & Related papers (2024-02-11T12:57:16Z)
- Similarity-aware Positive Instance Sampling for Graph Contrastive Pre-training [82.68805025636165]
We propose to select positive graph instances directly from existing graphs in the training set.
Our selection is based on certain domain-specific pair-wise similarity measurements.
Besides, we develop an adaptive node-level pre-training method to dynamically mask nodes to distribute them evenly in the graph.
arXiv Detail & Related papers (2022-06-23T20:12:51Z)
- Zero-shot Domain Adaptation of Heterogeneous Graphs via Knowledge Transfer Networks [72.82524864001691]
Heterogeneous graph neural networks (HGNNs) have shown superior performance as powerful representation learning techniques.
There is no direct way to learn using labels rooted at different node types.
In this work, we propose a novel domain adaptation method, Knowledge Transfer Networks for HGNNs (HGNN-KTN)
arXiv Detail & Related papers (2022-03-03T21:00:23Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Efficient Variational Graph Autoencoders for Unsupervised Cross-domain Prerequisite Chains [3.358838755118655]
We introduce Domain-Adversarial Variational Graph Autoencoders (DAVGAE) to solve this cross-domain prerequisite chain learning task efficiently.
Our novel model consists of a variational graph autoencoder (VGAE) and a domain discriminator.
Results show that our model outperforms recent graph-based baselines while using only 1/10 of the graph scale and 1/3 of the computation time.
arXiv Detail & Related papers (2021-09-17T19:07:27Z)
- Learning Domain-invariant Graph for Adaptive Semi-supervised Domain Adaptation with Few Labeled Source Samples [65.55521019202557]
Domain adaptation aims to generalize a model from a source domain to tackle tasks in a related but different target domain.
Traditional domain adaptation algorithms assume that sufficient labeled data, treated as prior knowledge, are available in the source domain.
We propose a Domain-invariant Graph Learning (DGL) approach for domain adaptation with only a few labeled source samples.
arXiv Detail & Related papers (2020-08-21T08:13:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.