Unsupervised Prompting for Graph Neural Networks
- URL: http://arxiv.org/abs/2505.16903v1
- Date: Thu, 22 May 2025 17:03:20 GMT
- Title: Unsupervised Prompting for Graph Neural Networks
- Authors: Peyman Baghershahi, Sourav Medya
- Abstract summary: We introduce a challenging problem setup to evaluate GNN prompting methods. We propose a fully unsupervised prompting method based on consistency regularization through pseudo-labeling. Our unsupervised approach outperforms the state-of-the-art prompting methods that have access to labels.
- Score: 3.2752005091619076
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Prompt tuning methods for Graph Neural Networks (GNNs) have become popular to address the semantic gap between pre-training and fine-tuning steps. However, existing GNN prompting methods rely on labeled data and involve lightweight fine-tuning for downstream tasks. Meanwhile, in-context learning methods for Large Language Models (LLMs) have shown promising performance with no parameter updating and no or minimal labeled data. Inspired by these approaches, in this work, we first introduce a challenging problem setup to evaluate GNN prompting methods. This setup encourages a prompting function to enhance a pre-trained GNN's generalization to a target dataset under covariate shift without updating the GNN's parameters and with no labeled data. Next, we propose a fully unsupervised prompting method based on consistency regularization through pseudo-labeling. We use two regularization techniques to align the prompted graphs' distribution with the original data and reduce biased predictions. Through extensive experiments under our problem setting, we demonstrate that our unsupervised approach outperforms the state-of-the-art prompting methods that have access to labels.
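Below is a minimal, self-contained sketch of the core recipe described in the abstract: with the pre-trained GNN frozen, only a prompt is tuned, using confident pseudo-labels as a consistency target under input perturbation. The toy dense GCN, the additive feature-space prompt, the noise scale, and the 0.8 confidence threshold are illustrative assumptions; the two regularizers mentioned in the abstract (distribution alignment and bias reduction) are omitted.

```python
import torch
import torch.nn.functional as F

class TinyGCN(torch.nn.Module):
    """Stand-in for a frozen, pre-trained GNN (dense 2-layer GCN)."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hid_dim)
        self.w2 = torch.nn.Linear(hid_dim, n_classes)

    def forward(self, x, a_norm):
        h = F.relu(a_norm @ self.w1(x))
        return a_norm @ self.w2(h)

def normalize_adj(a):
    a = a + torch.eye(a.size(0))           # add self-loops
    d = a.sum(1).clamp(min=1.0).rsqrt()    # D^{-1/2}
    return d[:, None] * a * d[None, :]

n, d, c = 200, 16, 4                       # toy target graph
x = torch.randn(n, d)
a_norm = normalize_adj((torch.rand(n, n) < 0.05).float())

gnn = TinyGCN(d, 32, c)
for p in gnn.parameters():                 # the GNN is never updated
    p.requires_grad_(False)

prompt = torch.zeros(1, d, requires_grad=True)   # additive feature prompt
opt = torch.optim.Adam([prompt], lr=1e-2)

for step in range(100):
    with torch.no_grad():                  # pseudo-labels from prompted graph
        probs = F.softmax(gnn(x + prompt, a_norm), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = conf > 0.8                  # keep confident predictions only

    if not mask.any():
        continue
    noisy = x + prompt + 0.1 * torch.randn_like(x)   # perturbed view
    logits = gnn(noisy, a_norm)
    # consistency regularization: predictions on the perturbed view
    # should agree with the pseudo-labels; gradients reach only the prompt
    loss = F.cross_entropy(logits[mask], pseudo[mask])
    opt.zero_grad(); loss.backward(); opt.step()
```

Since gradients flow only into the prompt tensor, the pre-trained weights are never updated and no ground-truth labels are used, matching the problem setup above.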
Related papers
- LOBSTUR: A Local Bootstrap Framework for Tuning Unsupervised Representations in Graph Neural Networks [0.9208007322096533]
Graph Neural Networks (GNNs) are increasingly used in conjunction with unsupervised learning techniques to learn powerful node representations. We propose a novel framework designed to adapt bootstrapping techniques for unsupervised graph representation learning.
arXiv Detail & Related papers (2025-05-20T19:59:35Z)
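As a hedged illustration only (not LOBSTUR's actual algorithm), adapting statistical bootstrapping to unsupervised graph representations might look like this: resample the node set with replacement, re-embed each resample, and score a representation by how stable pairwise distances remain. The `embed` stand-in and the distance-correlation metric are assumptions.

```python
import numpy as np

def embed(features, seed):
    """Stand-in for an unsupervised encoder; here a random projection."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(features.shape[1], 8))
    return features @ w

def bootstrap_stability(features, n_boot=20):
    """Score a representation by how stable pairwise node distances
    are across bootstrap resamples of the node set."""
    n = features.shape[0]
    base = embed(features, seed=0)
    base_d = np.linalg.norm(base[:, None] - base[None, :], axis=-1)
    scores = []
    for b in range(n_boot):
        idx = np.random.default_rng(b).choice(n, size=n, replace=True)
        emb = embed(features[idx], seed=b + 1)
        d = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)
        # correlate resampled distances with matching base distances
        ref = base_d[np.ix_(idx, idx)]
        scores.append(np.corrcoef(d.ravel(), ref.ravel())[0, 1])
    return float(np.mean(scores))

features = np.random.rand(50, 16)
print(f"bootstrap stability: {bootstrap_stability(features):.3f}")
```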
- Edge Prompt Tuning for Graph Neural Networks [40.62424370491229]
We propose EdgePrompt, a simple yet effective graph prompt tuning method from the perspective of edges. Our method is compatible with prevalent GNN architectures pre-trained under various pre-training strategies.
arXiv Detail & Related papers (2025-03-02T06:07:54Z)
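A minimal sketch of edge-level prompting in this spirit: a learnable prompt vector is added to every edge message while the pre-trained layer weights stay frozen. The single shared prompt and the toy sum-aggregation layer are assumptions; EdgePrompt's actual per-edge prompt construction is more elaborate.

```python
import torch
import torch.nn.functional as F

class MessagePassingLayer(torch.nn.Module):
    """Toy sum-aggregation layer standing in for a pre-trained GNN layer."""
    def __init__(self, dim):
        super().__init__()
        self.lin = torch.nn.Linear(dim, dim)

    def forward(self, x, edge_index, edge_prompt):
        src, dst = edge_index                   # (2, E) source/target indices
        msg = self.lin(x)[src] + edge_prompt    # prompt injected per message
        out = torch.zeros_like(x)
        out.index_add_(0, dst, msg)             # sum-aggregate into targets
        return F.relu(out)

n, dim, n_edges = 100, 16, 400
x = torch.randn(n, dim)
edge_index = torch.randint(0, n, (2, n_edges))

layer = MessagePassingLayer(dim)
for p in layer.parameters():                    # pre-trained weights frozen
    p.requires_grad_(False)

edge_prompt = torch.zeros(1, dim, requires_grad=True)  # learnable edge prompt
opt = torch.optim.Adam([edge_prompt], lr=1e-2)

h = layer(x, edge_index, edge_prompt)           # prompted forward pass
loss = h.pow(2).mean()     # stand-in objective; a real task loss goes here
opt.zero_grad(); loss.backward(); opt.step()
```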
- GNN-MultiFix: Addressing the pitfalls for GNNs for multi-label node classification [1.857645719601748]
Graph neural networks (GNNs) have emerged as powerful models for learning representations of graph data.
We show that even the most expressive GNN may fail to learn in the absence of node attributes and without explicit label information as input.
We propose a straightforward approach, referred to as GNN-MultiFix, that integrates the feature, label, and positional information of a node.
arXiv Detail & Related papers (2024-11-21T12:59:39Z)
- Classifying Nodes in Graphs without GNNs [50.311528896010785]
We propose a fully GNN-free approach to node classification, requiring no GNNs at either train or test time.
Our method consists of three key components: smoothness constraints, pseudo-labeling iterations, and neighborhood-label histograms.
arXiv Detail & Related papers (2024-02-08T18:59:30Z)
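Two of the three components, pseudo-labeling iterations and neighborhood-label histograms, are easy to sketch without any GNN; the smoothness constraints are omitted here, and the logistic-regression base classifier, the 0.9 confidence threshold, and the synthetic data are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d, c = 300, 8, 3
x = rng.normal(size=(n, d))                   # node features
adj = (rng.random((n, n)) < 0.03).astype(float)
labels = np.full(n, -1)                       # -1 marks unlabeled nodes
labels[:30] = rng.integers(0, c, size=30)     # a few seed labels

for it in range(5):                           # pseudo-labeling iterations
    # neighborhood-label histogram: counts of known labels around each node
    hist = np.zeros((n, c))
    for k in range(c):
        hist[:, k] = adj @ (labels == k).astype(float)
    hist /= hist.sum(1, keepdims=True).clip(min=1.0)

    feats = np.hstack([x, hist])              # raw features + histograms
    known = labels >= 0
    clf = LogisticRegression(max_iter=200).fit(feats[known], labels[known])
    probs = clf.predict_proba(feats)

    confident = (probs.max(1) > 0.9) & ~known
    if not confident.any():
        break
    labels[confident] = clf.classes_[probs[confident].argmax(1)]

print(f"labeled after pseudo-labeling: {(labels >= 0).sum()} / {n}")
```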
- GNNEvaluator: Evaluating GNN Performance On Unseen Graphs Without Labels [81.93520935479984]
We study a new problem, GNN model evaluation, which aims to assess how well a GNN model trained on labeled, observed graphs performs on unseen graphs without labels.
We propose a two-stage GNN model evaluation framework, including (1) DiscGraph set construction and (2) GNNEvaluator training and inference.
Under the effective training supervision from the DiscGraph set, GNNEvaluator learns to precisely estimate the node classification accuracy of the GNN model under evaluation.
arXiv Detail & Related papers (2023-10-23T05:51:59Z)
- RegExplainer: Generating Explanations for Graph Neural Networks in Regression Tasks [10.473178462412584]
We propose XAIG-R, a novel method for explaining graph regression models.
Our method addresses the distribution shift problem and the issue of continuously ordered decision boundaries.
We present a self-supervised learning strategy to handle the continuously ordered labels that arise in regression tasks.
arXiv Detail & Related papers (2023-07-15T16:16:22Z)
- Every Node Counts: Improving the Training of Graph Neural Networks on Node Classification [9.539495585692007]
We propose novel objective terms for training GNNs on node classification.
Our first term seeks to maximize the mutual information between node and label features.
Our second term promotes anisotropic smoothness in the prediction maps.
arXiv Detail & Related papers (2022-11-29T23:25:14Z)
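Both terms admit a compact sketch: a standard mutual-information surrogate (entropy of the mean prediction minus mean per-node entropy) and an edge-wise penalty weighted by feature similarity, which makes the smoothing anisotropic. The concrete forms below are assumptions, not necessarily the authors' exact losses.

```python
import torch
import torch.nn.functional as F

def mutual_info_term(logits):
    """H(mean prediction) - mean H(prediction): largest when predictions
    are confident per node yet balanced across classes overall."""
    p = F.softmax(logits, dim=1)
    marginal = p.mean(0)
    h_marginal = -(marginal * marginal.clamp_min(1e-9).log()).sum()
    h_cond = -(p * p.clamp_min(1e-9).log()).sum(1).mean()
    return h_marginal - h_cond              # to be maximized

def anisotropic_smoothness(logits, x, edge_index):
    """Edge-wise penalty weighted by feature similarity, so predictions
    are smoothed mainly where node features agree."""
    src, dst = edge_index
    p = F.softmax(logits, dim=1)
    sim = F.cosine_similarity(x[src], x[dst], dim=1).clamp(min=0.0)
    return (sim * (p[src] - p[dst]).pow(2).sum(1)).mean()

n, d, c, e = 100, 16, 4, 300
x = torch.randn(n, d)
edge_index = torch.randint(0, n, (2, e))
logits = torch.randn(n, c, requires_grad=True)  # stand-in for GNN outputs

loss = -mutual_info_term(logits) + 0.5 * anisotropic_smoothness(logits, x, edge_index)
loss.backward()
```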
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Very Deep Graph Neural Networks Via Noise Regularisation [57.450532911995516]
Graph Neural Networks (GNNs) perform learned message passing over an input graph.
We train a deep GNN with up to 100 message passing steps and achieve several state-of-the-art results.
arXiv Detail & Related papers (2021-06-15T08:50:10Z)
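One plausible reading of noise regularisation for very deep GNNs, sketched under stated assumptions: corrupt the input node features with Gaussian noise and train an auxiliary head to reconstruct the clean features alongside the task loss. The residual toy architecture, noise scale, and loss weighting below are illustrative, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

class DeepGNN(torch.nn.Module):
    def __init__(self, dim, n_layers=20, n_classes=4):
        super().__init__()
        self.layers = torch.nn.ModuleList(
            torch.nn.Linear(dim, dim) for _ in range(n_layers))
        self.task_head = torch.nn.Linear(dim, n_classes)
        self.denoise_head = torch.nn.Linear(dim, dim)  # auxiliary head

    def forward(self, x, a_norm):
        h = x
        for layer in self.layers:
            h = h + F.relu(a_norm @ layer(h))   # residual message passing
        return self.task_head(h), self.denoise_head(h)

n, d = 64, 16
x = torch.randn(n, d)
a = (torch.rand(n, n) < 0.1).float()
a_norm = a / a.sum(1).clamp(min=1.0)[:, None]   # row-normalized adjacency

model = DeepGNN(d)
y = torch.randint(0, 4, (n,))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

noisy_x = x + 0.1 * torch.randn_like(x)         # corrupt the inputs
logits, recon = model(noisy_x, a_norm)
# task loss plus auxiliary denoising loss on the clean features
loss = F.cross_entropy(logits, y) + 0.5 * F.mse_loss(recon, x)
opt.zero_grad(); loss.backward(); opt.step()
```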
- Combining Label Propagation and Simple Models Out-performs Graph Neural Networks [52.121819834353865]
We show that, on many standard transductive node classification benchmarks, combining label propagation with simple models exceeds or nearly matches the performance of state-of-the-art GNNs.
We call this overall procedure Correct and Smooth (C&S).
arXiv Detail & Related papers (2020-10-27T02:10:52Z)
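C&S is simple enough to sketch end to end: a base predictor, a "correct" step that propagates training residuals over the graph, and a "smooth" step that label-propagates the corrected predictions. The propagation coefficients and the stand-in base predictor are illustrative; on this synthetic data the printed accuracy is near chance, so the point is the pipeline shape, not the number.

```python
import numpy as np

def propagate(signal, a_norm, alpha, n_iters=20):
    """Label-propagation style smoothing: s <- alpha*A s + (1-alpha)*s0."""
    out = signal.copy()
    for _ in range(n_iters):
        out = alpha * (a_norm @ out) + (1 - alpha) * signal
    return out

rng = np.random.default_rng(0)
n, c = 200, 3
a = (rng.random((n, n)) < 0.05).astype(float)
a = np.maximum(a, a.T)                      # symmetrize
d = a.sum(1).clip(min=1.0) ** -0.5
a_norm = d[:, None] * a * d[None, :]        # D^-1/2 A D^-1/2

y = rng.integers(0, c, size=n)
train = rng.random(n) < 0.3
y_onehot = np.eye(c)[y]

# base predictor: noisy soft predictions standing in for a simple MLP
base = np.clip(y_onehot + 0.5 * rng.normal(size=(n, c)), 0, None)
base /= base.sum(1, keepdims=True)

# correct: propagate training residuals to all nodes
resid = np.zeros((n, c))
resid[train] = y_onehot[train] - base[train]
corrected = base + propagate(resid, a_norm, alpha=0.8)

# smooth: clamp train labels, then propagate corrected predictions
corrected[train] = y_onehot[train]
final = propagate(corrected, a_norm, alpha=0.8)

acc = (final.argmax(1) == y)[~train].mean()
print(f"test accuracy: {acc:.3f}")
```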