Mind the Links: Cross-Layer Attention for Link Prediction in Multiplex Networks
- URL: http://arxiv.org/abs/2509.23409v1
- Date: Sat, 27 Sep 2025 16:55:15 GMT
- Title: Mind the Links: Cross-Layer Attention for Link Prediction in Multiplex Networks
- Authors: Devesh Sharma, Aditya Kishore, Ayush Garg, Debajyoti Mazumder, Debasis Mohapatra, Jasabanta Patro
- Abstract summary: Multiplex graphs capture diverse relations among shared nodes. Most predictors either collapse layers or treat them independently. We frame multiplex link prediction as multi-view edge classification.
- Score: 1.2006896500048552
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multiplex graphs capture diverse relations among shared nodes. Most predictors either collapse layers or treat them independently. This loses crucial inter-layer dependencies and struggles with scalability. To overcome this, we frame multiplex link prediction as multi-view edge classification. For each node pair, we construct a sequence of per-layer edge views and apply cross-layer self-attention to fuse evidence for the target layer. We present two models as instances of this framework: Trans-SLE, a lightweight transformer over static embeddings, and Trans-GAT, which combines layer-specific GAT encoders with transformer fusion. To ensure scalability and fairness, we introduce a Union-Set candidate pool and two leakage-free protocols: cross-layer and inductive subgraph generalization. Experiments on six public multiplex datasets show consistent macro-F1 gains over strong baselines (MELL, HOPLP-MUL, RMNE). Our approach is simple, scalable, and compatible with both precomputed embeddings and GNN encoders.
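A minimal, hypothetical PyTorch sketch of the fusion step may help: one edge-view token per layer for a node pair, cross-layer self-attention over the tokens, and a link score read off the target layer's token. It assumes precomputed static node embeddings, as in the Trans-SLE variant; all class and parameter names are illustrative, not the authors' released code.

```python
# Sketch (not the authors' code): multi-view edge classification with
# cross-layer self-attention for multiplex link prediction.
import torch
import torch.nn as nn

class CrossLayerEdgeClassifier(nn.Module):
    def __init__(self, num_layers: int, emb_dim: int, d_model: int = 128,
                 n_heads: int = 4, n_blocks: int = 2):
        super().__init__()
        # Per-layer edge view: [h_u ; h_v] from that layer's embeddings.
        self.view_proj = nn.Linear(2 * emb_dim, d_model)
        # Learned layer-index embeddings play the role of positions.
        self.layer_emb = nn.Embedding(num_layers, d_model)
        block = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.fusion = nn.TransformerEncoder(block, n_blocks)
        self.scorer = nn.Linear(d_model, 1)

    def forward(self, pair_views: torch.Tensor, target_layer: int):
        # pair_views: (batch, num_layers, 2*emb_dim), one edge view per layer.
        n_layers = pair_views.size(1)
        idx = torch.arange(n_layers, device=pair_views.device)
        tokens = self.view_proj(pair_views) + self.layer_emb(idx)
        fused = self.fusion(tokens)            # cross-layer self-attention
        # Read out the token of the layer whose link we want to predict.
        return self.scorer(fused[:, target_layer]).squeeze(-1)  # logits

# Toy usage with random static embeddings (Trans-SLE-style input).
model = CrossLayerEdgeClassifier(num_layers=3, emb_dim=64)
views = torch.randn(8, 3, 128)                 # 8 node pairs, 3 layers
logits = model(views, target_layer=1)          # link scores in layer 1
```

Swapping the random views for layer-specific GAT outputs would give a Trans-GAT-style variant under the same fusion module.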
Related papers
- LabelFusion: Learning to Fuse LLMs and Transformer Classifiers for Robust Text Classification [0.7611870296994722]
LabelFusion is a fusion ensemble for text classification.
It learns to combine a transformer-based classifier with one or more Large Language Models.
It achieves 92.4% accuracy on AG News and 92.3% on 10-class Reuters 21578 topic classification.
arXiv Detail & Related papers (2025-12-11T16:39:07Z)
- Bridging the Divide: End-to-End Sequence-Graph Learning [47.95529678412846]
We argue that sequences and graphs are not separate problems but complementary facets of the same dataset.
We introduce BRIDGE, a unified end-to-end architecture that couples a sequence encoder with a GNN under a single objective (a toy sketch of this coupling follows the entry).
We show that BRIDGE consistently outperforms static GNNs, temporal graph methods, and sequence-only baselines on ranking and classification metrics.
arXiv Detail & Related papers (2025-10-29T03:06:54Z)
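A hedged sketch of such sequence-to-graph coupling, assuming per-node event sequences and a dense adjacency matrix; the GRU encoder, the single dense GCN-style layer, and all names are illustrative choices, not the paper's architecture.

```python
# Sketch: a sequence encoder feeds node states into a GNN layer, and one
# loss trains both components jointly rather than in separate stages.
import torch
import torch.nn as nn

class SequenceGraphModel(nn.Module):
    def __init__(self, feat_dim: int, hidden: int, n_classes: int):
        super().__init__()
        self.seq_enc = nn.GRU(feat_dim, hidden, batch_first=True)
        self.gnn_lin = nn.Linear(hidden, hidden)  # dense GCN-style layer
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, seqs: torch.Tensor, adj: torch.Tensor):
        # seqs: (num_nodes, seq_len, feat_dim); adj: (num_nodes, num_nodes).
        _, h_last = self.seq_enc(seqs)            # (1, num_nodes, hidden)
        h = h_last.squeeze(0)
        # Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}.
        a_hat = adj + torch.eye(adj.size(0))
        d_inv_sqrt = a_hat.sum(1).clamp(min=1).pow(-0.5)
        a_norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
        h = torch.relu(self.gnn_lin(a_norm @ h))  # one message-passing step
        return self.head(h)                       # per-node logits

# A single objective back-propagates through both components.
model = SequenceGraphModel(feat_dim=16, hidden=32, n_classes=4)
seqs, adj = torch.randn(10, 5, 16), (torch.rand(10, 10) > 0.7).float()
loss = nn.CrossEntropyLoss()(model(seqs, adj), torch.randint(0, 4, (10,)))
loss.backward()
```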
- ClustViT: Clustering-based Token Merging for Semantic Segmentation [2.661056455199956]
Recent works have focused on dynamically merging tokens according to the image complexity.
We propose ClustViT, where we expand upon the Vision Transformer (ViT) backbone and address semantic segmentation.
Our approach achieves up to 2.18x fewer GFLOPs and 1.64x faster inference on three different datasets, with comparable segmentation accuracy.
arXiv Detail & Related papers (2025-10-02T12:15:40Z)
- MSGCN: Multiplex Spatial Graph Convolution Network for Interlayer Link Weight Prediction [0.27624021966289597]
Link weight prediction has received less attention than binary link classification because of its greater complexity.
We propose a new method named Multiplex Spatial Graph Convolution Network (MSGCN), which spatially embeds information across multiple layers to predict interlayer link weights.
The MSGCN model generalizes spatial graph convolution to multiplex networks and captures the geometric structure of nodes across multiple layers.
arXiv Detail & Related papers (2025-04-24T17:08:16Z)
- Multigraph Message Passing with Bi-Directional Multi-Edge Aggregations [5.193718340934995]
MEGA-GNN is a unified framework for message passing on multigraphs (a rough sketch of the two-stage aggregation follows the entry).
We show that MEGA-GNN is not only permutation equivariant but also universal given a strict total ordering on the edges.
Experiments show that MEGA-GNN significantly outperforms state-of-the-art solutions by up to 13% on Anti-Money Laundering datasets.
arXiv Detail & Related papers (2024-11-29T20:15:18Z)
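A rough sketch of two-stage aggregation on a multigraph in this spirit: parallel edges between the same node pair are summed into one message first, and pair messages are then aggregated into each endpoint along and against the edge direction. The function is illustrative, not the paper's exact operator.

```python
# Sketch: two-stage, bi-directional multi-edge aggregation on a multigraph.
import torch

def multi_edge_aggregate(edge_index, edge_feat, num_nodes):
    # edge_index: (2, E) with possible parallel edges; edge_feat: (E, d).
    src, dst = edge_index
    d = edge_feat.size(1)
    # Stage 1: sum parallel edges sharing the same (src, dst) pair.
    pair_id = src * num_nodes + dst
    uniq, inv = torch.unique(pair_id, return_inverse=True)
    pair_msg = torch.zeros(uniq.size(0), d).index_add_(0, inv, edge_feat)
    # Stage 2: aggregate one message per pair into each endpoint,
    # once along the edge direction and once against it.
    pair_src, pair_dst = uniq // num_nodes, uniq % num_nodes
    fwd = torch.zeros(num_nodes, d).index_add_(0, pair_dst, pair_msg)
    bwd = torch.zeros(num_nodes, d).index_add_(0, pair_src, pair_msg)
    return torch.cat([fwd, bwd], dim=1)           # (num_nodes, 2d)

# Toy multigraph: two parallel edges 0->1 plus one edge 2->0.
ei = torch.tensor([[0, 0, 2], [1, 1, 0]])
out = multi_edge_aggregate(ei, torch.randn(3, 8), num_nodes=3)
```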
- Unveiling Induction Heads: Provable Training Dynamics and Feature Learning in Transformers [54.20763128054692]
We study how a two-attention-layer transformer is trained to perform ICL on $n$-gram Markov chain data.
We prove that the gradient flow with respect to a cross-entropy ICL loss converges to a limiting model.
arXiv Detail & Related papers (2024-09-09T18:10:26Z)
- Structure-Aware DropEdge Towards Deep Graph Convolutional Networks [83.38709956935095]
Graph Convolutional Networks (GCNs) suffer a marked drop in performance as more layers are stacked.
As depth grows, over-smoothing decouples the network's output from its input, weakening expressivity and trainability.
We investigate refined measures built upon DropEdge, an existing simple yet effective technique for relieving over-smoothing (the vanilla baseline is sketched after this entry).
arXiv Detail & Related papers (2023-06-21T08:11:40Z)
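For reference, a minimal sketch of the vanilla DropEdge baseline that such refined measures build on; the function name and usage are illustrative.

```python
# Sketch: vanilla DropEdge, applied afresh at each training step.
import torch

def drop_edge(edge_index: torch.Tensor, p: float) -> torch.Tensor:
    """Keep each edge independently with probability 1 - p (training only)."""
    keep = torch.rand(edge_index.size(1)) >= p
    return edge_index[:, keep]

edges = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
sparser = drop_edge(edges, p=0.5)   # a different random subgraph per call
```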
- AMT: All-Pairs Multi-Field Transforms for Efficient Frame Interpolation [80.33846577924363]
We present All-Pairs Multi-Field Transforms (AMT), a new network architecture for video frame interpolation.
It is based on two essential designs. First, we build bidirectional correlation volumes for all pairs of pixels, and use the predicted bilateral flows to retrieve correlations (the volume construction is sketched after this entry).
Second, we derive multiple groups of fine-grained flow fields from one pair of updated coarse flows for performing backward warping on the input frames separately.
arXiv Detail & Related papers (2023-04-19T16:18:47Z)
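A small sketch of the first ingredient, an all-pairs correlation volume between the feature maps of two frames; bilateral flow prediction and backward warping are omitted, and the function is an illustrative stand-in.

```python
# Sketch: all-pairs correlation volume between two frames' feature maps.
import torch

def all_pairs_correlation(f0: torch.Tensor, f1: torch.Tensor) -> torch.Tensor:
    # f0, f1: (batch, channels, H, W) features of two consecutive frames.
    b, c, h, w = f0.shape
    v0 = f0.flatten(2)                        # (b, c, H*W)
    v1 = f1.flatten(2)
    corr = torch.einsum('bci,bcj->bij', v0, v1) / c ** 0.5
    return corr.view(b, h, w, h, w)           # similarity of every pixel pair

corr = all_pairs_correlation(torch.randn(1, 32, 16, 16),
                             torch.randn(1, 32, 16, 16))
```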
- HiFormer: Hierarchical Multi-scale Representations Using Transformers for Medical Image Segmentation [3.478921293603811]
HiFormer is a novel method that efficiently bridges a CNN and a transformer for medical image segmentation.
To secure a fine fusion of global and local features, we propose a Double-Level Fusion (DLF) module in the skip connection of the encoder-decoder structure.
arXiv Detail & Related papers (2022-07-18T11:30:06Z)
- MGAE: Masked Autoencoders for Self-Supervised Learning on Graphs [55.66953093401889]
We propose a masked graph autoencoder (MGAE) framework to perform effective learning on graph-structured data (a toy sketch of the masking objective follows the entry).
Taking insights from self-supervised learning, we randomly mask a large proportion of edges and try to reconstruct these missing edges during training.
arXiv Detail & Related papers (2022-01-07T16:48:07Z)
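A toy sketch of the masked-edge objective in this spirit: hide a large fraction of edges, then train hidden edges to score higher than random node pairs under a dot-product decoder. A real MGAE encodes the visible graph with a GNN; the random stand-in embeddings and names here are illustrative.

```python
# Sketch: masked-edge self-supervision with a dot-product decoder.
import torch
import torch.nn as nn

def mask_edges(edge_index, mask_ratio=0.7):
    E = edge_index.size(1)
    perm = torch.randperm(E)
    cut = int(E * mask_ratio)
    # Returns (visible edges for the encoder, hidden edges to reconstruct).
    return edge_index[:, perm[cut:]], edge_index[:, perm[:cut]]

def reconstruction_loss(z, hidden_edges, num_nodes):
    # Masked edges should score high, random node pairs low.
    neg = torch.randint(0, num_nodes, hidden_edges.shape)
    pos_logit = (z[hidden_edges[0]] * z[hidden_edges[1]]).sum(-1)
    neg_logit = (z[neg[0]] * z[neg[1]]).sum(-1)
    labels = torch.cat([torch.ones_like(pos_logit),
                        torch.zeros_like(neg_logit)])
    return nn.functional.binary_cross_entropy_with_logits(
        torch.cat([pos_logit, neg_logit]), labels)

z = torch.randn(10, 16, requires_grad=True)    # stand-in encoder output
visible, hidden = mask_edges(torch.randint(0, 10, (2, 40)))
reconstruction_loss(z, hidden, num_nodes=10).backward()
```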
- Augmenting Convolutional networks with attention-based aggregation [55.97184767391253]
We show how to augment any convolutional network with an attention-based global map to achieve non-local reasoning (sketched after this entry).
We plug this learned aggregation layer into a simple patch-based convolutional network parametrized by two parameters (width and depth).
It yields surprisingly competitive trade-offs between accuracy and complexity, in particular in terms of memory consumption.
arXiv Detail & Related papers (2021-12-27T14:05:41Z)
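A compact sketch of the idea: a single learned query attends over the flattened feature map of any convolutional backbone, replacing global average pooling; names and sizes are illustrative.

```python
# Sketch: attention-based global aggregation in place of average pooling.
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    def __init__(self, dim: int, n_heads: int = 1):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, 1, dim))
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, feat_map: torch.Tensor) -> torch.Tensor:
        # feat_map: (batch, channels, H, W) from any conv backbone.
        b, c, h, w = feat_map.shape
        tokens = feat_map.flatten(2).transpose(1, 2)   # (b, H*W, c)
        q = self.query.expand(b, -1, -1)
        pooled, _ = self.attn(q, tokens, tokens)       # non-local readout
        return pooled.squeeze(1)                       # (b, c)

pool = AttentionPool(dim=64)
vec = pool(torch.randn(2, 64, 7, 7))                   # replaces avg-pool
```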
- Graph Cross Networks with Vertex Infomax Pooling [69.38969610952927]
We propose a novel graph cross network (GXN) to achieve comprehensive feature learning from multiple scales of a graph.
Based on trainable hierarchical representations of a graph, GXN enables the interchange of intermediate features across scales to promote information flow.
arXiv Detail & Related papers (2020-10-05T06:34:23Z)
- Relation Transformer Network [25.141472361426818]
We propose a novel transformer formulation for scene graph generation and relation prediction.
We leverage the encoder-decoder architecture of the transformer for rich feature embedding of nodes and edges.
Our relation prediction module classifies the directed relation from the learned node and edge embeddings.
arXiv Detail & Related papers (2020-04-13T20:47:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.