SGPT: Few-Shot Prompt Tuning for Signed Graphs
- URL: http://arxiv.org/abs/2412.12155v2
- Date: Sun, 17 Aug 2025 16:07:53 GMT
- Title: SGPT: Few-Shot Prompt Tuning for Signed Graphs
- Authors: Zian Zhai, Sima Qing, Xiaoyang Wang, Wenjie Zhang
- Abstract summary: Signed Graph Prompt Tuning (SGPT) is a graph prompting framework that adapts pre-trained unsigned GNNs to few-shot signed graph tasks. We conduct experiments on seven benchmark signed graph datasets, demonstrating that SGPT significantly outperforms existing state-of-the-art methods.
- Score: 8.42756062274768
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Signed Graph Neural Networks (SGNNs) are effective in learning expressive representations for signed graphs but typically require substantial task-specific labels, limiting their applicability in label-scarce industrial scenarios. In contrast, unsigned graph structures are abundant and can be readily leveraged to pre-train Graph Neural Networks (GNNs), offering a promising solution to reduce supervision requirements in downstream signed graph tasks. However, transferring knowledge from unsigned to signed graphs is non-trivial due to the fundamental discrepancies in graph types and task objectives between pre-training and downstream phases. To address this challenge, we propose Signed Graph Prompt Tuning (SGPT), a novel graph prompting framework that adapts pre-trained unsigned GNNs to few-shot signed graph tasks. We first design a graph template based on balance theory to disentangle mixed node relationships introduced by negative links, mitigating the structural mismatches between unsigned and signed graphs. We further introduce a task template that reformulates downstream signed tasks into a unified link prediction objective, aligning their optimization goals with the pre-training task. Furthermore, we develop feature prompts that align downstream semantic spaces with the feature spaces learned during pre-training, and semantic prompts to integrate link sign semantics in a task-aware manner. We conduct extensive experiments on seven benchmark signed graph datasets, demonstrating that SGPT significantly outperforms existing state-of-the-art methods, establishing a powerful and generalizable solution for few-shot signed graph learning.
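The abstract names four components (graph template, task template, feature prompts, semantic prompts) but shows no code. Below is a minimal, hypothetical PyTorch sketch of the first three ideas; every name in it (`split_signed_views`, `FeaturePrompt`, `score_links`, `gnn`) is an illustrative assumption, not the authors' released implementation, and the sign-based split is a simplification of the full balance-theory template.

```python
# Hypothetical sketch of the prompting ideas described in the abstract.
# All names are illustrative; this is not the authors' code.
import torch
import torch.nn as nn

def split_signed_views(edge_index: torch.Tensor, edge_sign: torch.Tensor):
    """Graph template (simplified): separate a signed graph into a
    positive-edge view and a negative-edge view, so a pre-trained
    unsigned GNN only propagates over sign-homogeneous structure."""
    pos = edge_index[:, edge_sign > 0]
    neg = edge_index[:, edge_sign < 0]
    return pos, neg

class FeaturePrompt(nn.Module):
    """Feature prompt: a learnable offset added to every node feature,
    steering downstream features toward the pre-trained feature space
    while the backbone stays frozen."""
    def __init__(self, dim: int):
        super().__init__()
        self.delta = nn.Parameter(torch.zeros(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.delta

def score_links(z: torch.Tensor, pairs: torch.Tensor) -> torch.Tensor:
    """Task template: downstream signed tasks are cast as link
    prediction; a dot product scores each candidate link."""
    return (z[pairs[0]] * z[pairs[1]]).sum(dim=-1)

# Usage with a frozen "pre-trained" encoder (a Linear layer stands in
# for a real unsigned GNN backbone here).
gnn = nn.Linear(16, 16)
for p in gnn.parameters():
    p.requires_grad_(False)          # freeze backbone; tune prompts only

x = torch.randn(100, 16)                         # node features
edge_index = torch.randint(0, 100, (2, 400))     # edges
edge_sign = torch.randint(0, 2, (400,)) * 2 - 1  # +1 / -1 labels
pos_view, neg_view = split_signed_views(edge_index, edge_sign)

prompt = FeaturePrompt(16)
z = gnn(prompt(x))                   # prompted node embeddings
scores = score_links(z, pos_view)    # unified link-prediction objective
```

In SGPT itself, semantic prompts would additionally inject link-sign semantics in a task-aware manner; that component is omitted here for brevity.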
Related papers
- Semi-supervised Instruction Tuning for Large Language Models on Text-Attributed Graphs [62.544129365882014]
We propose a novel Semi-supervised Instruction Tuning pipeline for Graph Learning, named SIT-Graph. SIT-Graph is model-agnostic and can be seamlessly integrated into any graph instruction tuning method that utilizes LLMs as the predictor. Extensive experiments demonstrate that when incorporated into state-of-the-art graph instruction tuning methods, SIT-Graph significantly enhances their performance on text-attributed graph benchmarks.
arXiv Detail & Related papers (2026-01-19T08:10:53Z) - UniGTE: Unified Graph-Text Encoding for Zero-Shot Generalization across Graph Tasks and Domains [12.05107789697386]
We introduce UniGTE, an instruction-tuned encoder-decoder framework that unifies structural and semantic reasoning. UniGTE is instruction-tuned on five datasets spanning node-level, edge-level, and graph-level tasks across diverse domains. It achieves new state-of-the-art zero-shot results on node classification, link prediction, graph classification, and graph regression under cross-task and cross-domain settings.
arXiv Detail & Related papers (2025-10-19T15:36:45Z) - Revisiting Graph Neural Networks on Graph-level Tasks: Comprehensive Experiments, Analysis, and Improvements [54.006506479865344]
We propose a unified evaluation framework for graph-level Graph Neural Networks (GNNs). This framework provides a standardized setting to evaluate GNNs across diverse datasets. We also propose a novel GNN model with enhanced expressivity and generalization capabilities.
arXiv Detail & Related papers (2025-01-01T08:48:53Z) - Towards Graph Foundation Models: Learning Generalities Across Graphs via Task-Trees [50.78679002846741]
We propose a novel approach to cross-task generalization in graphs via task-trees. We show that pretraining a graph neural network (GNN) on diverse task-trees with a reconstruction objective induces transferable knowledge. This enables efficient adaptation to downstream tasks with minimal fine-tuning.
arXiv Detail & Related papers (2024-12-21T02:07:43Z) - Instance-Aware Graph Prompt Learning [71.26108600288308]
We introduce Instance-Aware Graph Prompt Learning (IA-GPL) in this paper.
The process involves generating intermediate prompts for each instance using a lightweight architecture.
Experiments conducted on multiple datasets and settings showcase the superior performance of IA-GPL compared to state-of-the-art baselines.
arXiv Detail & Related papers (2024-11-26T18:38:38Z) - GraphAlign: Pretraining One Graph Neural Network on Multiple Graphs via Feature Alignment [30.56443056293688]
Graph self-supervised learning (SSL) holds considerable promise for mining and learning with graph-structured data.
In this work, we aim to pretrain one graph neural network (GNN) on a varied collection of graphs endowed with rich node features.
We present a general GraphAlign method that can be seamlessly integrated into the existing graph SSL framework.
arXiv Detail & Related papers (2024-06-05T05:22:32Z) - Generalized Graph Prompt: Toward a Unification of Pre-Training and Downstream Tasks on Graphs [20.406549548630156]
GraphPrompt is a novel pre-training and prompting framework on graphs.
It unifies pre-training and downstream tasks into a common task template.
It also employs a learnable prompt to assist a downstream task in locating the most relevant knowledge from the pre-trained model.
arXiv Detail & Related papers (2023-11-26T14:35:28Z) - GraphGPT: Graph Instruction Tuning for Large Language Models [27.036935149004726]
Graph Neural Networks (GNNs) have evolved to understand graph structures.
To enhance robustness, self-supervised learning (SSL) has become a vital tool for data augmentation.
Our research tackles this challenge by advancing graph model generalization in zero-shot learning environments.
arXiv Detail & Related papers (2023-10-19T06:17:46Z) - Deep Prompt Tuning for Graph Transformers [55.2480439325792]
Fine-tuning is resource-intensive and requires storing multiple copies of large models.
We propose a novel approach called deep graph prompt tuning as an alternative to fine-tuning.
By freezing the pre-trained parameters and only updating the added tokens, our approach reduces the number of free parameters and eliminates the need for multiple model copies.
arXiv Detail & Related papers (2023-09-18T20:12:17Z) - SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) on a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the finetuned LM.
arXiv Detail & Related papers (2023-08-03T07:00:04Z) - Multi-View Graph Representation Learning Beyond Homophily [2.601278669926709]
Unsupervised graph representation learning (GRL) aims to distill diverse graph information into task-agnostic embeddings without label supervision.
A novel framework, denoted as Multi-view Graph (MVGE), is proposed, and a set of key designs is identified.
arXiv Detail & Related papers (2023-04-15T08:35:49Z) - GraphPrompt: Unifying Pre-Training and Downstream Tasks for Graph Neural Networks [16.455234748896157]
GraphPrompt is a novel pre-training and prompting framework on graphs.
It unifies pre-training and downstream tasks into a common task template.
It also employs a learnable prompt to assist a downstream task in locating the most relevant knowledge from the pre-trained model.
arXiv Detail & Related papers (2023-02-16T02:51:38Z) - Similarity-aware Positive Instance Sampling for Graph Contrastive Pre-training [82.68805025636165]
We propose to select positive graph instances directly from existing graphs in the training set.
Our selection is based on certain domain-specific pair-wise similarity measurements.
In addition, we develop an adaptive node-level pre-training method that dynamically masks nodes so that they are distributed evenly in the graph.
arXiv Detail & Related papers (2022-06-23T20:12:51Z) - Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity at modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z) - Signed Graph Diffusion Network [17.20546861491478]
Given a signed social graph, how can we learn appropriate node representations to infer the signs of missing edges?
We propose Signed Graph Diffusion Network (SGDNet), a novel graph neural network that achieves end-to-end node representation learning for link sign prediction in signed social graphs.
arXiv Detail & Related papers (2020-12-28T11:08:30Z) - Contrastive and Generative Graph Convolutional Networks for Graph-based Semi-Supervised Learning [64.98816284854067]
Graph-based Semi-Supervised Learning (SSL) aims to transfer the labels of a handful of labeled data to the remaining massive unlabeled data via a graph.
A novel GCN-based SSL algorithm is presented in this paper to enrich the supervision signals by utilizing both data similarities and graph structure.
arXiv Detail & Related papers (2020-09-15T13:59:28Z) - GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.