GNNs Meet Sequence Models Along the Shortest-Path: an Expressive Method for Link Prediction
- URL: http://arxiv.org/abs/2507.07138v1
- Date: Wed, 09 Jul 2025 01:37:19 GMT
- Title: GNNs Meet Sequence Models Along the Shortest-Path: an Expressive Method for Link Prediction
- Authors: Francesco Ferrini, Veronica Lachi, Antonio Longa, Bruno Lepri, Andrea Passerini
- Abstract summary: SP4LP (Shortest Path for Link Prediction) is a novel framework that combines GNN-based node encodings with sequence modeling over shortest paths. We show that SP4LP achieves state-of-the-art performance across link prediction benchmarks.
- Score: 11.810758962687427
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) often struggle to capture the link-specific structural patterns crucial for accurate link prediction, as their node-centric message-passing schemes overlook the subgraph structures connecting a pair of nodes. Existing methods to inject such structural context either incur high computational cost or rely on simplistic heuristics (e.g., common neighbor counts) that fail to model multi-hop dependencies. We introduce SP4LP (Shortest Path for Link Prediction), a novel framework that combines GNN-based node encodings with sequence modeling over shortest paths. Specifically, SP4LP first applies a GNN to compute representations for all nodes, then extracts the shortest path between each candidate node pair and processes the resulting sequence of node embeddings using a sequence model. This design enables SP4LP to capture expressive multi-hop relational patterns with computational efficiency. Empirically, SP4LP achieves state-of-the-art performance across link prediction benchmarks. Theoretically, we prove that SP4LP is strictly more expressive than standard message-passing GNNs and several state-of-the-art structural features methods, establishing it as a general and principled approach for link prediction in graphs.
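To make the pipeline concrete, below is a minimal sketch of the three steps the abstract describes, assuming a PyTorch/networkx setup; `SP4LPSketch`, the GRU sequence model, and the `node_encoder` interface are illustrative choices, not the authors' implementation.

```python
# Minimal sketch of the SP4LP pipeline (illustrative, not the authors' code):
# encode all nodes with a GNN, extract the shortest path between a candidate
# pair, and run a sequence model over the path's node embeddings.
import networkx as nx
import torch
import torch.nn as nn

class SP4LPSketch(nn.Module):
    def __init__(self, node_encoder: nn.Module, dim: int):
        super().__init__()
        self.node_encoder = node_encoder              # any message-passing GNN
        self.seq_model = nn.GRU(dim, dim, batch_first=True)
        self.scorer = nn.Linear(dim, 1)

    def forward(self, graph, x, edge_index, u, v):
        h = self.node_encoder(x, edge_index)          # (num_nodes, dim)
        path = nx.shortest_path(graph, u, v)          # e.g. [u, a, b, v]
        seq = h[torch.tensor(path)].unsqueeze(0)      # (1, path_len, dim)
        _, last = self.seq_model(seq)                 # final hidden state
        return self.scorer(last.squeeze(0))           # link score for (u, v)
```

Because node embeddings are computed once per graph and each candidate pair only touches a short sequence, the per-pair cost stays low, which is consistent with the efficiency claim in the abstract.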
Related papers
- Learning Scalable Structural Representations for Link Prediction with Bloom Signatures [39.63963077346406]
Graph neural networks (GNNs) are known to perform sub-optimally on link prediction tasks.
We propose to learn structural link representations by augmenting the message-passing framework of GNNs with Bloom signatures.
Our proposed model achieves comparable or better performance than existing edge-wise GNN models.
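As a rough illustration of the signature idea (not the paper's exact construction), `bloom_signature` below hashes a neighbor set into a fixed-size bit array; the bit width and hash count are arbitrary assumptions.

```python
# Toy Bloom-style signature of a node's neighbor set (illustrative): hash
# each neighbor into a fixed-size bit array; ANDing two signatures gives a
# cheap, approximate proxy for neighborhood overlap.
import numpy as np

def bloom_signature(neighbors, bits=256, num_hashes=3):
    sig = np.zeros(bits, dtype=bool)
    for n in neighbors:
        for k in range(num_hashes):
            sig[hash((n, k)) % bits] = True
    return sig

# Approximate common-neighbor signal between two nodes' neighbor sets.
overlap = np.sum(bloom_signature([1, 2, 3]) & bloom_signature([2, 3, 4]))
```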
arXiv Detail & Related papers (2023-12-28T02:21:40Z)
- Recurrent Distance Filtering for Graph Representation Learning [34.761926988427284]
Graph neural networks based on iterative one-hop message passing struggle to harness information from distant nodes effectively.
We propose a new architecture to address this challenge.
Our model aggregates other nodes by their shortest distances to the target and uses a linear RNN to encode the sequence of hop representations.
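A hedged sketch of that recipe, assuming a networkx graph and dense node features; `hop_sequence` and the plain `nn.RNN` (standing in for the paper's linear RNN) are illustrative.

```python
# Illustrative sketch: group nodes by shortest distance to the target,
# pool each hop, and let an RNN encode the resulting hop sequence.
import networkx as nx
import torch
import torch.nn as nn

def hop_sequence(graph, x, target):
    dist = nx.single_source_shortest_path_length(graph, target)
    max_d = max(dist.values())
    hops = []
    for d in range(max_d + 1):
        idx = [n for n, dn in dist.items() if dn == d]
        hops.append(x[torch.tensor(idx)].mean(dim=0))    # pool hop d
    return torch.stack(hops).unsqueeze(0)                # (1, hops, dim)

G, x = nx.path_graph(5), torch.randn(5, 16)
rnn = nn.RNN(16, 16, batch_first=True)     # stand-in for a linear RNN
_, h = rnn(hop_sequence(G, x, target=0))   # h encodes node 0's neighborhood
```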
arXiv Detail & Related papers (2023-12-03T23:36:16Z)
- Efficient Link Prediction via GNN Layers Induced by Negative Sampling [86.87385758192566]
Graph neural networks (GNNs) for link prediction can loosely be divided into two broad categories.
We propose a novel GNN architecture whereby the forward pass explicitly depends on both positive (as is typical) and negative (unique to our approach) edges.
This is achieved by recasting the embeddings themselves as minimizers of a forward-pass-specific energy function that favors separation of positive and negative samples.
arXiv Detail & Related papers (2023-10-14T07:02:54Z)
- Pure Message Passing Can Estimate Common Neighbor for Link Prediction [25.044734252779975]
We study the proficiency of MPNNs in approximating the Common Neighbor (CN) heuristic.
We introduce the Message Passing Link Predictor (MPLP), a novel link prediction model.
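The underlying trick can be sketched in a few lines (an illustrative reconstruction, not the paper's exact estimator; `estimate_cn` and the signature dimension are assumptions): propagate quasi-orthogonal random signatures one hop and take inner products.

```python
# Estimate common-neighbor counts with pure message passing (illustrative):
# each node gets a random quasi-orthogonal signature; one propagation step
# sums neighbor signatures, and inner products of the propagated vectors
# concentrate around the common-neighbor counts.
import torch

def estimate_cn(adj, dim=4096):
    n = adj.size(0)
    sig = torch.randn(n, dim) / dim ** 0.5    # E[sig_i . sig_j] ~ 1 iff i == j
    msg = adj @ sig                           # sum of neighbor signatures
    return msg @ msg.T                        # entry (u, v) ~ |N(u) & N(v)|

adj = torch.tensor([[0, 1, 1, 0], [1, 0, 1, 1],
                    [1, 1, 0, 1], [0, 1, 1, 0]], dtype=torch.float)
print(estimate_cn(adj)[0, 3])                 # close to 2 (shared: nodes 1, 2)
```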
arXiv Detail & Related papers (2023-09-02T16:20:41Z)
- Deep Graph Neural Networks via Posteriori-Sampling-based Node-Adaptive Residual Module [65.81781176362848]
Graph Neural Networks (GNNs) can learn from graph-structured data through neighborhood information aggregation.
As the number of layers increases, node representations become indistinguishable, which is known as over-smoothing.
We propose a Posterior-Sampling-based, Node-distinguishing Residual module (PSNR).
arXiv Detail & Related papers (2023-05-09T12:03:42Z)
- Improving Graph Neural Networks on Multi-node Tasks with the Labeling Trick [31.448841287258116]
We study using graph neural networks (GNNs) for multi-node representation learning.
A common practice is to directly aggregate single-node representations obtained by a GNN.
We propose the labeling trick, which first labels nodes in the graph according to their relationships with the target node set before applying a GNN, as sketched below.
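A minimal zero-one variant, assuming dense node features (`apply_labeling_trick` is illustrative; published variants use richer labels such as distance-based ones):

```python
# Zero-one labeling trick (minimal variant): mark nodes in the target set
# with an extra feature before the GNN runs, so the learned representation
# depends on which nodes are being predicted jointly.
import torch

def apply_labeling_trick(x, target_nodes):
    label = torch.zeros(x.size(0), 1)
    label[torch.tensor(target_nodes)] = 1.0   # 1 = in target set, 0 = not
    return torch.cat([x, label], dim=1)       # feed this to the GNN

x_labeled = apply_labeling_trick(torch.randn(6, 8), target_nodes=[0, 3])
```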
arXiv Detail & Related papers (2023-04-20T04:03:40Z)
- Temporal Aggregation and Propagation Graph Neural Networks for Dynamic Representation [67.26422477327179]
Temporal graphs exhibit dynamic interactions between nodes over continuous time.
We propose a novel temporal graph convolution method that aggregates over the whole temporal neighborhood.
Our proposed TAP-GNN outperforms existing temporal graph methods by a large margin in terms of both predictive performance and online inference latency.
arXiv Detail & Related papers (2023-04-15T08:17:18Z)
- Structure Enhanced Graph Neural Networks for Link Prediction [6.872826041648584]
We propose Structure Enhanced Graph neural network (SEG) for link prediction.
SEG incorporates surrounding topological information of target nodes into an ordinary GNN model.
Experiments on the OGB link prediction datasets demonstrate that SEG achieves state-of-the-art results.
arXiv Detail & Related papers (2022-01-14T03:49:30Z)
- VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization [70.8567058758375]
VQ-GNN is a universal framework to scale up any convolution-based GNN using Vector Quantization (VQ) without compromising performance.
Our framework avoids the "neighbor explosion" problem of GNNs using quantized representations combined with a low-rank version of the graph convolution matrix.
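One way to picture the mechanism (a toy sketch, not the paper's full scheme, with `quantize` and the codebook size as assumptions): replace each message with its nearest entry in a small learned codebook, so memory scales with the codebook rather than with the neighborhood.

```python
# Toy vector-quantization step (illustrative): each message is replaced by
# its nearest codeword, so only K codewords need to be kept around instead
# of every neighbor's representation.
import torch

def quantize(h, codebook):
    # h: (n, d) messages; codebook: (K, d) learned codewords
    idx = torch.cdist(h, codebook).argmin(dim=1)   # nearest codeword per row
    return codebook[idx]

h_q = quantize(torch.randn(100, 32), codebook=torch.randn(16, 32))
```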
arXiv Detail & Related papers (2021-10-27T11:48:50Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order Weisfeiler-Lehman tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE), a new class of structural features for graph representation learning.
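A sketch of the simplest distance-encoding flavor, with `distance_encoding` and the "far" default as illustrative choices: append each node's shortest-path distance to the target node set as extra input features.

```python
# Simplest distance-encoding flavor (illustrative): append each node's
# shortest-path distance to every target node as extra input features,
# giving the GNN structural signal plain message passing cannot recover.
import networkx as nx
import torch

def distance_encoding(graph, x, targets):
    n = x.size(0)
    de = torch.full((n, len(targets)), float(n))   # unreachable -> "far"
    for j, t in enumerate(targets):
        for node, d in nx.single_source_shortest_path_length(graph, t).items():
            de[node, j] = float(d)
    return torch.cat([x, de], dim=1)

x_de = distance_encoding(nx.path_graph(5), torch.randn(5, 8), targets=[0, 4])
```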
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
- Policy-GNN: Aggregation Optimization for Graph Neural Networks [60.50932472042379]
Graph neural networks (GNNs) aim to model the local graph structures and capture the hierarchical patterns by aggregating the information from neighbors.
It is a challenging task to develop an effective aggregation strategy for each node, given complex graphs and sparse features.
We propose Policy-GNN, a meta-policy framework that combines the sampling procedure and message passing of GNNs into a joint learning process.
arXiv Detail & Related papers (2020-06-26T17:03:06Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
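One standard way to train such binary codes end to end (a common recipe, not necessarily BGN's exact method; `BinarizeSTE` is illustrative): binarize with sign() in the forward pass and pass gradients straight through.

```python
# Straight-through binarization (one common recipe, illustrative): the
# forward pass emits {-1, +1} codes; the backward pass copies gradients
# through unchanged so the binary embeddings remain trainable.
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, h):
        return torch.sign(h)          # binary node codes
    @staticmethod
    def backward(ctx, grad_out):
        return grad_out               # straight-through estimator

binary_h = BinarizeSTE.apply(torch.randn(10, 32, requires_grad=True))
```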
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
- Graph Sequential Network for Reasoning over Sequences [38.766982479196926]
We consider a novel case where reasoning is needed over graphs built from sequences.
Existing GNN models fulfill this goal by first summarizing the node sequences into fixed-dimensional vectors, then applying a GNN to these vectors.
We propose a new type of GNN called Graph Sequential Network (GSN), which features a new message passing algorithm based on co-attention between a node and each of its neighbors.
arXiv Detail & Related papers (2020-04-04T19:18:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.