GP2F: Cross-Domain Graph Prompting with Adaptive Fusion of Pre-trained Graph Neural Networks
- URL: http://arxiv.org/abs/2602.11629v1
- Date: Thu, 12 Feb 2026 06:25:21 GMT
- Title: GP2F: Cross-Domain Graph Prompting with Adaptive Fusion of Pre-trained Graph Neural Networks
- Authors: Dongxiao He, Wenxuan Sun, Yongqi Huang, Jitao Zhao, Di Jin,
- Abstract summary: Graph Prompt Learning (GPL) has emerged as a promising paradigm for downstream adaptation of pre-trained graph models. We propose GP2F, a dual-branch method that explicitly instantiates the two extremes: (1) a frozen branch that retains pre-trained knowledge, and (2) an adapted branch with lightweight adapters for task-specific adaptation.
- Score: 26.435071565801394
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Prompt Learning (GPL) has recently emerged as a promising paradigm for downstream adaptation of pre-trained graph models, mitigating the misalignment between pre-training objectives and downstream tasks. Recently, the focus of GPL has shifted from in-domain to cross-domain scenarios, which are closer to real-world applications, where the pre-training source and the downstream target often differ substantially in data distribution. However, why GPL remains effective under such domain shifts is still unexplored. Empirically, we observe that representative GPL methods are competitive with two simple baselines in cross-domain settings, full fine-tuning (FT) and linear probing (LP), motivating us to seek a deeper understanding of the prompting mechanism. We provide a theoretical analysis demonstrating that jointly leveraging these two complementary branches yields a smaller estimation error than using either branch alone, formally proving that cross-domain GPL benefits from the integration of pre-trained knowledge and task-specific adaptation. Based on this insight, we propose GP2F, a dual-branch GPL method that explicitly instantiates the two extremes: (1) a frozen branch that retains pre-trained knowledge, and (2) an adapted branch with lightweight adapters for task-specific adaptation. We then perform adaptive fusion under topology constraints via a contrastive loss and a topology-consistent loss. Extensive experiments on cross-domain few-shot node and graph classification demonstrate that our method outperforms existing methods.
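The abstract describes GP2F at the level of architecture: a frozen branch, an adapter-based branch, and an adaptive fusion regularised by topology. As a reading aid only, the following minimal PyTorch-style sketch shows one way such a dual-branch setup could look; the class names, the adapter placement, the sigmoid gate, and the toy topology loss are assumptions for illustration, not the authors' implementation (which additionally uses a contrastive loss for the fusion).

```python
# Minimal, hedged sketch of a dual-branch setup with adaptive fusion.
# Illustrative only: names and loss forms are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Adapter(nn.Module):
    """Lightweight bottleneck adapter applied on top of frozen GNN features."""

    def __init__(self, dim: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Residual form: the adapted branch starts from the frozen representation
        # and learns only a small task-specific correction.
        return h + self.up(F.relu(self.down(h)))


class DualBranchSketch(nn.Module):
    def __init__(self, pretrained_gnn: nn.Module, dim: int):
        super().__init__()
        self.gnn = pretrained_gnn
        for p in self.gnn.parameters():
            p.requires_grad = False              # frozen branch: keep pre-trained knowledge
        self.adapter = Adapter(dim)              # adapted branch: only the adapter trains
        self.gate = nn.Parameter(torch.zeros(1)) # learnable fusion weight

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        h_frozen = self.gnn(x, edge_index)       # pre-trained representation
        h_adapted = self.adapter(h_frozen)       # task-specific representation
        a = torch.sigmoid(self.gate)
        return a * h_frozen + (1 - a) * h_adapted  # adaptive fusion of the two branches


def topology_consistent_loss(h: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
    """Toy stand-in for a topology constraint: fused embeddings of connected
    nodes should stay similar (the paper's actual loss may differ)."""
    src, dst = edge_index  # edge_index assumed to be a [2, E] tensor
    return (1.0 - F.cosine_similarity(h[src], h[dst], dim=-1)).mean()
```

Under this reading, downstream adaptation would train only the adapter, the gate, and a task head, keeping the trainable-parameter budget close to linear probing while still allowing task-specific drift away from the frozen representation.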
Related papers
- One Prompt Fits All: Universal Graph Adaptation for Pretrained Models [43.705631137295114]
Graph Prompt Learning (GPL) has emerged as a promising paradigm that bridges graph pretraining models and downstream scenarios. We propose UniPrompt, a novel method that adapts any pretrained model, unleashing the capability of pretrained models while preserving the input graph.
arXiv Detail & Related papers (2025-09-26T14:39:31Z)
- Pave Your Own Path: Graph Gradual Domain Adaptation on Fused Gromov-Wasserstein Geodesics [59.07903030446756]
Graph neural networks are highly vulnerable to distribution shifts on graphs. We present Gadget, the first framework for non-IID graph data. Gadget can be seamlessly integrated with existing graph DA methods to handle large shifts on graphs.
arXiv Detail & Related papers (2025-05-19T05:03:58Z)
- Gradual Domain Adaptation for Graph Learning [15.648578809740414]
We present a graph gradual domain adaptation (GGDA) framework, which constructs a compact domain sequence that minimizes information loss during adaptation. Our framework provides implementable upper and lower bounds for the intractable inter-domain Wasserstein distance, $W_p(\mu_t, \mu_{t+1})$ (the standard definition is recalled after this entry), enabling its flexible adjustment for optimal domain formation.
arXiv Detail & Related papers (2025-01-29T06:48:59Z)
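For reference, the $W_p(\mu_t, \mu_{t+1})$ appearing in the entry above is the standard p-Wasserstein distance between consecutive intermediate domain distributions, which the GGDA framework bounds rather than computes directly. A textbook form (not the paper's graph-specific construction) is:

```latex
W_p(\mu_t, \mu_{t+1}) \;=\; \left( \inf_{\gamma \in \Pi(\mu_t,\, \mu_{t+1})} \int d(x, y)^p \, \mathrm{d}\gamma(x, y) \right)^{1/p}
```

where $\Pi(\mu_t, \mu_{t+1})$ denotes the set of couplings with marginals $\mu_t$ and $\mu_{t+1}$, and $d$ is a ground metric on the underlying representation space (the specific cost used in the paper may differ).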
- Degree-Conscious Spiking Graph for Cross-Domain Adaptation [51.58506501415558]
Spiking Graph Networks (SGNs) have demonstrated significant potential in graph classification. We introduce a novel framework named Degree-Conscious Spiking Graph for Cross-Domain Adaptation (DeSGraDA). DeSGraDA enhances generalization across domains with three key components.
arXiv Detail & Related papers (2024-10-09T13:45:54Z)
- Pre-trained Graphformer-based Ranking at Web-scale Search (Extended Abstract) [56.55728466130238]
We introduce the novel MPGraf model, which aims to integrate the regression capabilities of Transformers with the link prediction strengths of GNNs.
We conduct extensive offline and online experiments to rigorously evaluate the performance of MPGraf.
arXiv Detail & Related papers (2024-09-25T03:33:47Z)
- Rethinking Propagation for Unsupervised Graph Domain Adaptation [17.443218657417454]
Unsupervised Graph Domain Adaptation (UGDA) aims to transfer knowledge from a labelled source graph to an unlabelled target graph.
We propose a simple yet effective approach called A2GNN for graph domain adaptation.
arXiv Detail & Related papers (2024-02-08T13:24:57Z)
- Domain Adaptive Graph Classification [0.0]
We introduce Dual Adversarial Graph Representation Learning (DAGRL), which explores the graph topology from dual branches and mitigates domain discrepancies via dual adversarial learning.
Our approach incorporates adaptive perturbations into the dual branches, which align the source and target distributions to address domain discrepancies.
arXiv Detail & Related papers (2023-12-21T02:37:56Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Learning Domain-invariant Graph for Adaptive Semi-supervised Domain Adaptation with Few Labeled Source Samples [65.55521019202557]
Domain adaptation aims to generalize a model from a source domain to tackle tasks in a related but different target domain.
Traditional domain adaptation algorithms assume that enough labeled data, treated as prior knowledge, are available in the source domain.
We propose a Domain-invariant Graph Learning (DGL) approach for domain adaptation with only a few labeled source samples.
arXiv Detail & Related papers (2020-08-21T08:13:25Z)
- Supervised Domain Adaptation using Graph Embedding [86.3361797111839]
Domain adaptation methods assume that the distributions of the two domains are shifted and attempt to realign them.
We propose a generic framework based on graph embedding.
We show that the proposed approach leads to a powerful Domain Adaptation framework.
arXiv Detail & Related papers (2020-03-09T12:25:13Z)