One Prompt Fits All: Universal Graph Adaptation for Pretrained Models
- URL: http://arxiv.org/abs/2509.22416v2
- Date: Tue, 14 Oct 2025 16:01:27 GMT
- Title: One Prompt Fits All: Universal Graph Adaptation for Pretrained Models
- Authors: Yongqi Huang, Jitao Zhao, Dongxiao He, Xiaobao Wang, Yawen Li, Yuxiao Huang, Di Jin, Zhiyong Feng
- Abstract summary: Graph Prompt Learning (GPL) has emerged as a promising paradigm that bridges graph pretraining models and downstream scenarios. We propose UniPrompt, a novel method that adapts any pretrained model, unleashing its capability while preserving the input graph.
- Score: 43.705631137295114
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Prompt Learning (GPL) has emerged as a promising paradigm that bridges graph pretraining models and downstream scenarios, mitigating label dependency and the misalignment between upstream pretraining and downstream tasks. Although existing GPL studies explore various prompt strategies, their effectiveness and underlying principles remain unclear. We identify two critical limitations: (1) Lack of consensus on underlying mechanisms: Although current GPL methods have advanced the field, there is no consensus on how prompts interact with pretrained models, as different strategies intervene at different spaces within the model, i.e., input-level, layer-wise, and representation-level prompts. (2) Limited scenario adaptability: Most methods fail to generalize across diverse downstream scenarios, especially under data distribution shifts (e.g., homophilic-to-heterophilic graphs). To address these issues, we theoretically analyze existing GPL approaches and reveal that representation-level prompts essentially function as fine-tuning a simple downstream classifier. We therefore propose that graph prompt learning should focus on unleashing the capability of the pretrained model, while the classifier should adapt to downstream scenarios. Based on these findings, we propose UniPrompt, a novel GPL method that adapts to any pretrained model, unleashing its capability while preserving the input graph. Extensive experiments demonstrate that our method can effectively integrate with various pretrained models and achieve strong performance across in-domain and cross-domain scenarios.
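The claimed equivalence between representation-level prompting and classifier fine-tuning can be illustrated numerically. The sketch below (hypothetical shapes, NumPy only, not the authors' implementation) shows that adding a learnable prompt vector `p` to frozen embeddings before a linear head is identical to leaving the embeddings untouched and absorbing the prompt into the head's bias:

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(5, 8))   # frozen node embeddings (5 nodes, dim 8)
W = rng.normal(size=(3, 8))   # linear classifier weights (3 classes)
b = rng.normal(size=3)        # classifier bias
p = rng.normal(size=8)        # representation-level prompt vector

# Prompted classification: shift every embedding by p, then classify.
logits_prompted = (Z + p) @ W.T + b

# Equivalent view: keep Z frozen and fold the prompt into the bias,
# i.e. (Z + p) W^T + b = Z W^T + (b + W p).
logits_bias_tuned = Z @ W.T + (b + W @ p)

assert np.allclose(logits_prompted, logits_bias_tuned)
```

In this linear-head setting the prompt's expressive power is exactly that of retuning the classifier, which is the intuition behind the paper's argument that prompting effort is better spent on the pretrained model itself.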
Related papers
- GP2F: Cross-Domain Graph Prompting with Adaptive Fusion of Pre-trained Graph Neural Networks [26.435071565801394]
Graph Prompt Learning (GPL) has emerged as a promising paradigm for downstream adaptation of pre-trained graph models. We propose GP2F, a dual-branch method that explicitly instantiates the two extremes: (1) a frozen branch that retains pre-trained knowledge, and (2) an adapted branch with lightweight adapters for task-specific adaptation.
arXiv Detail & Related papers (2026-02-12T06:25:21Z) - A Scalable Pretraining Framework for Link Prediction with Efficient Adaptation [16.82426251068573]
Link Prediction (LP) is a critical task in graph machine learning. Existing methods face key challenges, including limited supervision from sparse connectivity. We explore pretraining as a solution to address these challenges.
arXiv Detail & Related papers (2025-08-06T17:10:31Z) - HGMP: Heterogeneous Graph Multi-Task Prompt Learning [18.703129208282913]
We propose a novel multi-task prompt framework for the heterogeneous graph domain, named HGMP. First, to bridge the gap between the pre-trained model and downstream tasks, we reformulate all downstream tasks into a unified graph-level task format. We design a graph-level contrastive pre-training strategy to better leverage heterogeneous information and enhance performance in multi-task scenarios.
arXiv Detail & Related papers (2025-07-10T04:01:47Z) - Multiple Stochastic Prompt Tuning for Few-shot Adaptation under Extreme Domain Shift [14.85375816073596]
We introduce multiple learnable prompts per class to capture diverse modes in visual representations arising from distribution shifts. These prompts are modeled as learnable Gaussian distributions, enabling efficient exploration of the prompt parameter space. Experiments and comparisons with state-of-the-art methods demonstrate the effectiveness of the proposed framework.
arXiv Detail & Related papers (2025-06-04T13:18:04Z) - Graph Data Selection for Domain Adaptation: A Model-Free Approach [54.27731120381295]
Graph domain adaptation (GDA) is a fundamental task in graph machine learning. We propose a novel model-free framework, GRADATE, that selects the best training data from the source domain for the classification task on the target domain. We show GRADATE outperforms existing selection methods and enhances off-the-shelf GDA methods with much less training data.
arXiv Detail & Related papers (2025-05-22T21:18:39Z) - Prompt Tuning Vision Language Models with Margin Regularizer for Few-Shot Learning under Distribution Shifts [13.21626568246313]
We analyze whether vision-language foundation models can be adapted to target datasets with very different distributions and classes. We propose a novel prompt-tuning method, PromptMargin, for adapting such large-scale VLMs directly on the few target samples. PromptMargin effectively tunes the text as well as visual prompts for this task, and has two main modules.
arXiv Detail & Related papers (2025-05-21T13:26:56Z) - Raising the Bar in Graph OOD Generalization: Invariant Learning Beyond Explicit Environment Modeling [61.222803636132554]
Real-world graph data often exhibit diverse and shifting environments that traditional models fail to generalize across. We propose a novel method termed Multi-Prototype Hyperspherical Invariant Learning (MPHIL). MPHIL achieves state-of-the-art performance, significantly outperforming existing methods across graph data from various domains and with different distribution shifts.
arXiv Detail & Related papers (2025-02-15T07:40:14Z) - Can Graph Neural Networks Learn Language with Extremely Weak Text Supervision? [62.12375949429938]
We propose a multi-modal prompt learning paradigm to adapt pre-trained Graph Neural Networks to downstream tasks and data. Our new paradigm embeds the graphs directly in the same space as the Large Language Models (LLMs) by learning both graph prompts and text prompts simultaneously. We build the first CLIP-style zero-shot classification prototype that can generalize GNNs to unseen classes with extremely weak text supervision.
arXiv Detail & Related papers (2024-12-11T08:03:35Z) - Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z) - Universal Prompt Tuning for Graph Neural Networks [10.250964386142819]
We introduce a universal prompt-based tuning method called Graph Prompt Feature (GPF) for pre-trained GNN models under any pre-training strategy.
GPF operates on the input graph's feature space and can theoretically achieve an equivalent effect to any form of prompting function.
Our method significantly outperforms existing specialized prompt-based tuning methods when applied to models utilizing the pre-training strategy they specialize in.
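GPF's input-level mechanism, as described above, amounts to adding a single shared learnable vector to every node's features before the frozen GNN processes the graph. A minimal sketch of that feature-space operation (hypothetical shapes, NumPy only, not the paper's implementation):

```python
import numpy as np

X = np.ones((4, 6))       # node feature matrix: 4 nodes, feature dim 6
p = np.full(6, 0.5)       # one shared, learnable prompt feature (GPF-style)

# Input-level prompting: add the same prompt vector to every node's
# features; the prompted graph is then fed to the frozen pretrained GNN.
X_prompted = X + p        # broadcasts p across all nodes

assert X_prompted.shape == X.shape
```

During downstream adaptation only `p` (and a task head) would receive gradients, while the pretrained GNN stays frozen; GPF-plus variants learn per-node prompts instead of one shared vector.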
arXiv Detail & Related papers (2022-09-30T05:19:27Z) - Adaptive Fine-Grained Predicates Learning for Scene Graph Generation [122.4588401267544]
General Scene Graph Generation (SGG) models tend to predict head predicates and re-balancing strategies prefer tail categories.
We propose an Adaptive Fine-Grained Predicates Learning (FGPL-A) which aims at differentiating hard-to-distinguish predicates for SGG.
Our proposed model-agnostic strategy significantly boosts performance of benchmark models on VG-SGG and GQA-SGG datasets by up to 175% and 76% on Mean Recall@100, achieving new state-of-the-art performance.
arXiv Detail & Related papers (2022-07-11T03:37:57Z) - Similarity-aware Positive Instance Sampling for Graph Contrastive Pre-training [82.68805025636165]
We propose to select positive graph instances directly from existing graphs in the training set.
Our selection is based on certain domain-specific pair-wise similarity measurements.
In addition, we develop an adaptive node-level pre-training method that dynamically masks nodes so that the masked nodes are distributed evenly across the graph.
arXiv Detail & Related papers (2022-06-23T20:12:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.