Enhancing Node-Level Graph Domain Adaptation by Alleviating Local Dependency
- URL: http://arxiv.org/abs/2512.13149v1
- Date: Mon, 15 Dec 2025 10:00:25 GMT
- Title: Enhancing Node-Level Graph Domain Adaptation by Alleviating Local Dependency
- Authors: Xinwei Tai, Dongmian Zou, Hongfei Wang
- Abstract summary: Transferring knowledge effectively from one graph to another remains a critical challenge. In this paper, we show that conditional shift can be observed only if there exist local dependencies among node features. We propose to improve GDA by decorrelating node features, which can be specifically implemented through decorrelated GCN layers and graph transformer layers.
- Score: 8.229138664380324
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have witnessed significant advancements in machine learning methods on graphs. However, transferring knowledge effectively from one graph to another remains a critical challenge. This highlights the need for algorithms capable of applying information extracted from a source graph to an unlabeled target graph, a task known as unsupervised graph domain adaptation (GDA). One key difficulty in unsupervised GDA is conditional shift, which hinders transferability. In this paper, we show that conditional shift can be observed only if there exist local dependencies among node features. To support this claim, we perform a rigorous analysis and further provide generalization bounds for GDA when dependent node features are modeled using Markov chains. Guided by the theoretical findings, we propose to improve GDA by decorrelating node features, which can be specifically implemented through decorrelated GCN layers and graph transformer layers. Our experimental results demonstrate the effectiveness of this approach, showing not only substantial performance enhancements over baseline GDA methods but also clear visualizations of small intra-class distances in the learned representations. Our code is available at https://github.com/TechnologyAiGroup/DFT
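To make the decorrelation idea concrete, below is a minimal, hypothetical PyTorch sketch of a "decorrelated" GCN layer that whitens node features (ZCA-style) before the usual propagate-and-transform step. The class name, the choice of ZCA whitening, and the hyperparameters are illustrative assumptions, not the authors' exact DFT implementation (see the linked repository for that).
```python
# Hypothetical sketch of a decorrelated GCN layer: node features are
# whitened (covariance pushed toward identity) before message passing.
# ZCA whitening is an assumed choice, not necessarily the paper's.
import torch
import torch.nn as nn


class DecorrelatedGCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, eps: float = 1e-5):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.eps = eps  # numerical jitter for the covariance inverse

    def whiten(self, x: torch.Tensor) -> torch.Tensor:
        # ZCA whitening: decorrelate feature dimensions across nodes.
        x = x - x.mean(dim=0, keepdim=True)
        cov = (x.T @ x) / max(x.size(0) - 1, 1)
        cov = cov + self.eps * torch.eye(x.size(1), device=x.device)
        # Inverse square root of the covariance via eigendecomposition.
        evals, evecs = torch.linalg.eigh(cov)
        inv_sqrt = evecs @ torch.diag(evals.clamp_min(self.eps).rsqrt()) @ evecs.T
        return x @ inv_sqrt

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # adj_norm: normalized adjacency, e.g. D^{-1/2} (A + I) D^{-1/2}.
        x = self.whiten(x)                             # decorrelate features
        return torch.relu(self.linear(adj_norm @ x))   # standard GCN update
```
Whitening pushes the empirical feature covariance toward the identity, which is one concrete way to alleviate local dependency among node features before message passing.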
Related papers
- Incorporating Spatial Information into Goal-Conditioned Hierarchical Reinforcement Learning via Graph Representations [37.10671332775445]
The integration of graphs with Goal-Conditioned Hierarchical Reinforcement Learning (GCHRL) has recently gained attention.
Existing approaches typically rely on domain-specific knowledge to construct these graphs.
This paper proposes a solution by developing a graph encoder-decoder to evaluate unseen states.
arXiv Detail & Related papers (2025-11-14T00:58:39Z)
- Pave Your Own Path: Graph Gradual Domain Adaptation on Fused Gromov-Wasserstein Geodesics [59.07903030446756]
Graph neural networks are highly vulnerable to distribution shifts on graphs.
We present Gadget, the first gradual domain adaptation framework for non-IID graph data.
Gadget can be seamlessly integrated with existing graph DA methods to handle large shifts on graphs.
arXiv Detail & Related papers (2025-05-19T05:03:58Z)
- Revisiting, Benchmarking and Understanding Unsupervised Graph Domain Adaptation [31.106636947179005]
Unsupervised Graph Domain Adaptation involves the transfer of knowledge from a label-rich source graph to an unlabeled target graph.
We present the first comprehensive benchmark for unsupervised graph domain adaptation named GDABench.
We observe that the performance of current UGDA models varies significantly across different datasets and adaptation scenarios.
arXiv Detail & Related papers (2024-07-09T06:44:09Z)
- Efficient Graph Similarity Computation with Alignment Regularization [7.143879014059894]
Graph similarity computation (GSC) is a learning-based prediction task using Graph Neural Networks (GNNs).
We show that high-quality learning can be attained with a simple yet powerful regularization technique, which we call Alignment Regularization (AReg).
In the inference stage, the graph-level representations learned by the GNN encoder are used directly to compute the similarity score, without applying AReg again, which speeds up inference (a generic scoring sketch appears after this list).
arXiv Detail & Related papers (2024-06-21T07:37:28Z)
- GSINA: Improving Subgraph Extraction for Graph Invariant Learning via Graph Sinkhorn Attention [52.67633391931959]
Graph invariant learning (GIL) has been an effective approach to discovering the invariant relationships between graph data and its labels.
We propose a novel graph attention mechanism called Graph Sinkhorn Attention (GSINA).
GSINA is able to obtain meaningful, differentiable invariant subgraphs with controllable sparsity and softness (a generic Sinkhorn normalization sketch appears after this list).
arXiv Detail & Related papers (2024-02-11T12:57:16Z)
- Structural Re-weighting Improves Graph Domain Adaptation [13.019371337183202]
This work examines different impacts of distribution shifts caused by either graph structure or node attributes.
A novel approach, called structural reweighting (StruRW), is proposed to address this issue and is tested on synthetic graphs, four benchmark datasets, and a new application in high energy physics.
arXiv Detail & Related papers (2023-06-05T20:11:30Z)
- Learning Strong Graph Neural Networks with Weak Information [64.64996100343602]
We develop a principled approach to the problem of graph learning with weak information (GLWI).
We propose D$^2$PT, a dual-channel GNN framework that performs long-range information propagation not only on the input graph with incomplete structure but also on a global graph that encodes global semantic similarities.
arXiv Detail & Related papers (2023-05-29T04:51:09Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph (a generic version of such a loss is sketched after this list).
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
- Source Free Unsupervised Graph Domain Adaptation [60.901775859601685]
Unsupervised Graph Domain Adaptation (UGDA) shows its practical value in reducing the labeling cost for node classification.
Most existing UGDA methods heavily rely on the labeled graph in the source domain.
In some real-world scenarios, the source graph is inaccessible because of privacy issues.
We propose a novel scenario named Source Free Unsupervised Graph Domain Adaptation (SFUGDA).
arXiv Detail & Related papers (2021-12-02T03:18:18Z)
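For the "Efficient Graph Similarity Computation with Alignment Regularization" entry above, here is a minimal sketch of the AReg-free inference step: the similarity score is computed directly from the encoder's graph-level embeddings. Mean pooling and cosine similarity are assumed, illustrative choices, not necessarily the paper's.
```python
# Sketch of AReg-free inference: score two graphs directly from their
# pooled GNN embeddings, with no regularization pass at test time.
import torch
import torch.nn.functional as F


def pool_graph(node_embeddings: torch.Tensor) -> torch.Tensor:
    """Mean-pool node embeddings of shape (num_nodes, dim) into one vector."""
    return node_embeddings.mean(dim=0)


def graph_similarity(h_g1: torch.Tensor, h_g2: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between two graph-level representations (dim,)."""
    return F.cosine_similarity(h_g1.unsqueeze(0), h_g2.unsqueeze(0)).squeeze(0)
```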
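For the GSINA entry, a generic log-domain Sinkhorn normalization sketch: alternating row and column normalization turns raw attention logits into a soft, approximately doubly-stochastic selection matrix, with a temperature controlling sparsity and softness. The dense formulation and names are illustrative assumptions, not the paper's exact attention mechanism.
```python
# Generic Sinkhorn normalization in log space, in the spirit of GSINA:
# a differentiable, soft selection matrix from raw attention scores.
import torch


def sinkhorn_attention(scores: torch.Tensor,
                       tau: float = 1.0,
                       n_iters: int = 5) -> torch.Tensor:
    """scores: (n, m) raw attention logits. Returns a soft assignment
    whose rows and columns are approximately normalized."""
    log_p = scores / tau  # lower tau -> sparser, harder selection
    for _ in range(n_iters):
        log_p = log_p - torch.logsumexp(log_p, dim=1, keepdim=True)  # rows
        log_p = log_p - torch.logsumexp(log_p, dim=0, keepdim=True)  # cols
    return log_p.exp()
```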
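For the "Towards Unsupervised Deep Graph Structure Learning" entry, a generic InfoNCE-style version of the anchor-graph contrastive loss: each node's embedding from the learned graph is pulled toward the same node's embedding from the anchor graph, while other nodes act as negatives. Function names and the temperature are assumptions, not the paper's exact formulation.
```python
# Generic InfoNCE-style contrastive loss between two views of the same
# nodes: one from the learned graph, one from the anchor graph.
import torch
import torch.nn.functional as F


def anchor_contrastive_loss(z_learned: torch.Tensor,
                            z_anchor: torch.Tensor,
                            temperature: float = 0.5) -> torch.Tensor:
    """z_learned, z_anchor: (num_nodes, dim) node embeddings of two views."""
    z1 = F.normalize(z_learned, dim=1)
    z2 = F.normalize(z_anchor, dim=1)
    logits = z1 @ z2.T / temperature          # pairwise similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    # Each node's positive is the same node in the other view.
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.T, labels))
```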
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.