Gradual Domain Adaptation for Graph Learning
- URL: http://arxiv.org/abs/2501.17443v3
- Date: Thu, 28 Aug 2025 11:01:38 GMT
- Title: Gradual Domain Adaptation for Graph Learning
- Authors: Pui Ieng Lei, Ximing Chen, Yijun Sheng, Yanyan Liu, Zhiguo Gong, Qiang Yang
- Abstract summary: We present a graph gradual domain adaptation (GGDA) framework, which constructs a compact domain sequence that minimizes information loss during adaptation. Our framework provides implementable upper and lower bounds for the intractable inter-domain Wasserstein distance, $W_p(\mu_t,\mu_{t+1})$, enabling its flexible adjustment for optimal domain formation.
- Score: 15.648578809740414
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing machine learning literature lacks graph-based domain adaptation techniques capable of handling large distribution shifts, primarily due to the difficulty in simulating a coherent evolutionary path from source to target graph. To meet this challenge, we present a graph gradual domain adaptation (GGDA) framework, which constructs a compact domain sequence that minimizes information loss during adaptation. Our approach starts with an efficient generation of knowledge-preserving intermediate graphs over the Fused Gromov-Wasserstein (FGW) metric. A GGDA domain sequence is then constructed upon this bridging data pool through a novel vertex-based progression, which involves selecting "close" vertices and performing adaptive domain advancement to enhance inter-domain transferability. Theoretically, our framework provides implementable upper and lower bounds for the intractable inter-domain Wasserstein distance, $W_p(\mu_t,\mu_{t+1})$, enabling its flexible adjustment for optimal domain formation. Extensive experiments across diverse transfer scenarios demonstrate the superior performance of our GGDA framework.
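The core idea, adapting through a sequence of intermediate domains rather than jumping directly from source to target, can be illustrated with a toy sketch. The code below is NOT the paper's GGDA algorithm (which operates on graphs via the FGW metric); it is a hypothetical 1-D gradual self-training example in which class means drift from source to target, a threshold classifier is retrained on pseudo-labels at each intermediate domain, and the empirical Wasserstein-1 distance stands in for the inter-domain distance $W_p(\mu_t,\mu_{t+1})$ that the paper bounds. All function names and parameters are illustrative assumptions.

```python
import random

def w1(xs, ys):
    # Empirical Wasserstein-1 between equal-size 1-D samples:
    # sort both samples and average the coordinate-wise gaps.
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

def make_domain(shift, n=200, seed=0):
    # Two Gaussian classes whose means drift with `shift`.
    rng = random.Random(seed)
    half = n // 2
    data = [(rng.gauss(shift, 0.5), 0) for _ in range(half)]
    data += [(rng.gauss(shift + 2.0, 0.5), 1) for _ in range(half)]
    return data

def fit_threshold(data):
    # Midpoint of the two class means; predict class 1 when x > threshold.
    xs0 = [x for x, y in data if y == 0]
    xs1 = [x for x, y in data if y == 1]
    return (sum(xs0) / len(xs0) + sum(xs1) / len(xs1)) / 2

def accuracy(thr, data):
    return sum(int(x > thr) == y for x, y in data) / len(data)

def gradual_self_train(domains):
    thr = fit_threshold(domains[0])                    # true labels only at source
    for cur in domains[1:]:
        pseudo = [(x, int(x > thr)) for x, _ in cur]   # pseudo-label next domain
        thr = fit_threshold(pseudo)                    # adapt one step
    return thr

shifts = [0.0, 0.5, 1.0, 1.5, 2.0]                     # a "compact domain sequence"
domains = [make_domain(s, seed=i) for i, s in enumerate(shifts)]
# Inter-domain gaps: small steps keep each pseudo-labeling step reliable.
gaps = [w1([x for x, _ in a], [x for x, _ in b])
        for a, b in zip(domains, domains[1:])]
target = domains[-1]
acc_direct = accuracy(fit_threshold(domains[0]), target)
acc_gradual = accuracy(gradual_self_train(domains), target)
```

On this toy problem, direct source-to-target transfer fails (the source threshold sits far from the drifted target classes) while the gradual schedule stays accurate, which is the intuition behind keeping each $W_p(\mu_t,\mu_{t+1})$ small via a well-chosen domain sequence.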
Related papers
- Learning Structure-Semantic Evolution Trajectories for Graph Domain Adaptation [30.83176170397593]
Graph Domain Adaptation aims to bridge distribution shifts between domains by transferring knowledge from well-labeled source graphs to given unlabeled target graphs. One promising line of work discretizes the adaptation process, typically by constructing intermediate graphs or stepwise alignment procedures. We propose DiffGDA, which instead models domain adaptation as a continuous-time generative process.
arXiv Detail & Related papers (2026-02-11T04:11:04Z) - Graph Data Selection for Domain Adaptation: A Model-Free Approach [54.27731120381295]
Graph domain adaptation (GDA) is a fundamental task in graph machine learning. We propose a novel model-free framework, GRADATE, that selects the best training data from the source domain for the classification task on the target domain. We show GRADATE outperforms existing selection methods and enhances off-the-shelf GDA methods with far less training data.
arXiv Detail & Related papers (2025-05-22T21:18:39Z) - Cross-Domain Diffusion with Progressive Alignment for Efficient Adaptive Retrieval [52.67656818203429]
Unsupervised efficient domain adaptive retrieval aims to transfer knowledge from a labeled source domain to an unlabeled target domain. Existing methods fail to address potential noise in the target domain and directly align high-level features across domains. We propose a novel Cross-Domain Diffusion with Progressive Alignment method (COUPLE) to address these challenges.
arXiv Detail & Related papers (2025-05-20T04:17:39Z) - Pave Your Own Path: Graph Gradual Domain Adaptation on Fused Gromov-Wasserstein Geodesics [59.07903030446756]
Graph neural networks are highly vulnerable to distribution shifts on graphs. We present Gadget, the first gradual domain adaptation framework for non-IID graph data. Gadget can be seamlessly integrated with existing graph DA methods to handle large shifts on graphs.
arXiv Detail & Related papers (2025-05-19T05:03:58Z) - AGLP: A Graph Learning Perspective for Semi-supervised Domain Adaptation [13.472532378889264]
In semi-supervised domain adaptation (SSDA), the model aims to leverage partially labeled target domain data along with a large amount of labeled source domain data.
This paper proposes a graph learning perspective (AGLP) for semi-supervised domain adaptation.
We apply the graph convolutional network to the instance graph which allows structural information to propagate along the weighted graph edges.
arXiv Detail & Related papers (2024-11-20T09:41:41Z) - Adaptive Graph Integration for Cross-Domain Recommendation via Heterogeneous Graph Coordinators [31.05975545409408]
Leveraging multi-domain data can improve recommendation systems by enriching user insights and mitigating data sparsity in individual domains. We propose HAGO, a novel framework with Heterogeneous Adaptive Graph coOrdinators. Our framework adaptively adjusts the connections between coordinators and multi-domain graph nodes to enhance beneficial inter-domain interactions.
arXiv Detail & Related papers (2024-10-15T15:50:53Z) - Degree-Conscious Spiking Graph for Cross-Domain Adaptation [51.58506501415558]
Spiking Graph Networks (SGNs) have demonstrated significant potential in graph classification. We introduce a novel framework named Degree-Conscious Spiking Graph for Cross-Domain Adaptation (DeSGraDA). DeSGraDA enhances generalization across domains with three key components.
arXiv Detail & Related papers (2024-10-09T13:45:54Z) - StyDeSty: Min-Max Stylization and Destylization for Single Domain Generalization [85.18995948334592]
Single domain generalization (single DG) aims at learning a robust model generalizable to unseen domains from only one training domain.
State-of-the-art approaches have mostly relied on data augmentations, such as adversarial perturbation and style enhancement, to synthesize new data.
We propose StyDeSty, which explicitly accounts for the alignment of the source and pseudo domains in the process of data augmentation.
arXiv Detail & Related papers (2024-06-01T02:41:34Z) - Improve Cross-domain Mixed Sampling with Guidance Training for Adaptive Segmentation [9.875170018805768]
Unsupervised Domain Adaptation (UDA) endeavors to adjust models trained on a source domain to perform well on a target domain without requiring additional annotations.
We propose a novel auxiliary task called Guidance Training.
This task facilitates the effective utilization of cross-domain mixed sampling techniques while mitigating distribution shifts from the real world.
We demonstrate the efficacy of our approach by integrating it with existing methods, consistently improving performance.
arXiv Detail & Related papers (2024-03-22T07:12:48Z) - SPA: A Graph Spectral Alignment Perspective for Domain Adaptation [41.89873161315133]
Unsupervised domain adaptation (UDA) is a pivotal form in machine learning to extend the in-domain model to the distinctive target domains where the data distributions differ.
Most prior works focus on capturing the inter-domain transferability but largely overlook rich intra-domain structures, which empirically results in even worse discriminability.
We introduce a novel graph SPectral Alignment (SPA) framework to tackle the tradeoff.
arXiv Detail & Related papers (2023-10-26T17:13:48Z) - Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with wide ranges of application potentials.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z) - Learning Domain-invariant Graph for Adaptive Semi-supervised Domain Adaptation with Few Labeled Source Samples [65.55521019202557]
Domain adaptation aims to generalize a model from a source domain to tackle tasks in a related but different target domain.
Traditional domain adaptation algorithms assume that enough labeled data, which are treated as prior knowledge, are available in the source domain.
We propose a Domain-invariant Graph Learning (DGL) approach for domain adaptation with only a few labeled source samples.
arXiv Detail & Related papers (2020-08-21T08:13:25Z) - Supervised Domain Adaptation: A Graph Embedding Perspective and a Rectified Experimental Protocol [87.76993857713217]
We show that Domain Adaptation methods using pair-wise relationships between source and target domain data can be formulated as a Graph Embedding.
Specifically, we analyse the loss functions of three existing state-of-the-art Supervised Domain Adaptation methods and demonstrate that they perform Graph Embedding.
arXiv Detail & Related papers (2020-04-23T15:46:20Z) - Unsupervised Intra-domain Adaptation for Semantic Segmentation through Self-Supervision [73.76277367528657]
Convolutional neural network-based approaches have achieved remarkable progress in semantic segmentation.
Such progress, however, relies on costly pixel-level annotations; to cope with this limitation, automatically annotated data generated from graphic engines are used to train segmentation models.
We propose a two-step self-supervised domain adaptation approach to minimize the inter-domain and intra-domain gap together.
arXiv Detail & Related papers (2020-04-16T15:24:11Z) - Supervised Domain Adaptation using Graph Embedding [86.3361797111839]
Domain adaptation methods assume that distributions between the two domains are shifted and attempt to realign them.
We propose a generic framework based on graph embedding.
We show that the proposed approach leads to a powerful Domain Adaptation framework.
arXiv Detail & Related papers (2020-03-09T12:25:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.