Wavelet-Induced Rotary Encodings: RoPE Meets Graphs
- URL: http://arxiv.org/abs/2509.22259v2
- Date: Mon, 29 Sep 2025 18:41:25 GMT
- Title: Wavelet-Induced Rotary Encodings: RoPE Meets Graphs
- Authors: Isaac Reid, Arijit Sehanobish, Cederik Höfs, Bruno Mlodozeniec, Leonhard Vulpius, Federico Barbero, Adrian Weller, Krzysztof Choromanski, Richard E. Turner, Petar Veličković
- Abstract summary: WIRE extends Rotary Position Encodings (RoPE) to graph-structured data. We demonstrate that WIRE is more general than RoPE, recovering the latter in the special case of grid graphs. We test WIRE on a range of synthetic and real-world tasks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce WIRE: Wavelet-Induced Rotary Encodings. WIRE extends Rotary Position Encodings (RoPE), a popular algorithm in LLMs and ViTs, to graph-structured data. We demonstrate that WIRE is more general than RoPE, recovering the latter in the special case of grid graphs. WIRE also enjoys a host of desirable theoretical properties, including equivariance under node ordering permutation, compatibility with linear attention, and (under select assumptions) asymptotic dependence on graph resistive distance. We test WIRE on a range of synthetic and real-world tasks, including identifying monochromatic subgraphs, semantic segmentation of point clouds, and more standard graph benchmarks. We find it to be effective in settings where the underlying graph structure is important.
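The abstract states that WIRE generalizes RoPE and recovers it on grid graphs. As background, here is a minimal NumPy sketch of standard 1-D RoPE, the construction WIRE extends; the function name and frequency schedule are illustrative choices, not taken from the paper. The key property demonstrated is that inner products of rotated queries and keys depend only on the position difference, which is what makes RoPE compatible with relative attention and, as the abstract notes, with linear attention.

```python
import numpy as np

def rope_rotate(x, pos, base=10000.0):
    """Apply a rotary position encoding to vector x at integer position pos.

    Coordinate pairs (x[2i], x[2i+1]) are rotated by angle pos * theta_i,
    with frequencies theta_i = base**(-2i/d). This is the standard RoPE
    construction; WIRE (per the abstract) replaces the 1-D position index
    with wavelet-derived quantities, recovering this form on grid graphs.
    """
    d = x.shape[-1]
    assert d % 2 == 0, "dimension must be even to form rotation pairs"
    theta = base ** (-np.arange(0, d, 2) / d)   # (d/2,) per-pair frequencies
    ang = pos * theta                           # rotation angle per pair
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[0::2], x[1::2]
    out = np.empty_like(x, dtype=float)
    out[0::2] = x1 * cos - x2 * sin
    out[1::2] = x1 * sin + x2 * cos
    return out

# Relative-position property: the dot product of a rotated query and key
# depends only on the offset between their positions.
rng = np.random.default_rng(0)
q, k = rng.normal(size=8), rng.normal(size=8)
s1 = rope_rotate(q, 5) @ rope_rotate(k, 3)     # positions (5, 3), offset 2
s2 = rope_rotate(q, 12) @ rope_rotate(k, 10)   # positions (12, 10), offset 2
assert np.isclose(s1, s2)
```

The offset-invariance follows because each pairwise rotation satisfies R(m·θ)ᵀR(n·θ) = R((n−m)·θ), so the score is a function of n−m alone.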
Related papers
- SWING: Unlocking Implicit Graph Representations for Graph Random Features [57.956136773668476]
We propose SWING: Space Walks for Implicit Network Graphs, a new class of algorithms for computations involving Graph Random Features on graphs. We provide detailed analysis of SWING and complement it with thorough experiments on different classes of i-graphs.
arXiv Detail & Related papers (2026-02-13T08:12:38Z) - GLANCE: Graph Logic Attention Network with Cluster Enhancement for Heterophilous Graph Representation Learning [54.60090631330295]
Graph Neural Networks (GNNs) have demonstrated significant success in learning from graph-structured data but often struggle on heterophilous graphs. We propose GLANCE, a novel framework that integrates logic-guided reasoning, dynamic graph refinement, and adaptive clustering to enhance graph representation learning.
arXiv Detail & Related papers (2025-07-24T15:45:26Z) - Disentangled Graph Representation Based on Substructure-Aware Graph Optimal Matching Kernel Convolutional Networks [4.912298804026689]
Graphs effectively characterize relational data, driving graph representation learning methods. Recent disentangled graph representation learning enhances interpretability by decoupling independent factors in graph data. This paper proposes the Graph Optimal Matching Kernel Convolutional Network (GOMKCN) to address this limitation.
arXiv Detail & Related papers (2025-04-23T02:26:33Z) - Range-aware Positional Encoding via High-order Pretraining: Theory and Practice [14.521929085104441]
Unsupervised pre-training on vast amounts of graph data is critical in real-world applications wherein labeled data is limited.
We propose a novel pre-training strategy on graphs that focuses on modeling their multi-resolution structural information.
Because our approach relies solely on the graph structure, it is also domain-agnostic and adaptable to datasets from various domains.
arXiv Detail & Related papers (2024-09-27T19:53:10Z) - Joint Graph Rewiring and Feature Denoising via Spectral Resonance [10.850726111343063]
We propose an algorithm to jointly denoise the features and rewire the noisy graph. We show that it consistently outperforms existing methods on a range of synthetic and real-world node classification tasks.
arXiv Detail & Related papers (2024-08-13T20:16:11Z) - Greener GRASS: Enhancing GNNs with Encoding, Rewiring, and Attention [12.409982249220812]
We introduce Graph Attention with Structures (GRASS), a novel GNN architecture, to enhance graph relative attention. GRASS rewires the input graph by superimposing a random regular graph to achieve long-range information propagation. It also employs a novel additive attention mechanism tailored for graph-structured data.
arXiv Detail & Related papers (2024-07-08T06:21:56Z) - Rethinking Explaining Graph Neural Networks via Non-parametric Subgraph Matching [68.35685422301613]
We propose a novel non-parametric subgraph matching framework, dubbed MatchExplainer, to explore explanatory subgraphs.
It couples the target graph with other counterpart instances and identifies the most crucial joint substructure by minimizing the node correspondence-based distance.
Experiments on synthetic and real-world datasets show the effectiveness of our MatchExplainer, which outperforms all state-of-the-art parametric baselines by significant margins.
arXiv Detail & Related papers (2023-01-07T05:14:45Z) - Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
arXiv Detail & Related papers (2022-05-19T14:08:15Z) - Spectral Graph Convolutional Networks With Lifting-based Adaptive Graph Wavelets [81.63035727821145]
Spectral graph convolutional networks (SGCNs) have been attracting increasing attention in graph representation learning.
We propose a novel class of spectral graph convolutional networks that implement graph convolutions with adaptive graph wavelets.
arXiv Detail & Related papers (2021-08-03T17:57:53Z) - GraphiT: Encoding Graph Structure in Transformers [37.33808493548781]
We show that representations built by viewing graphs as sets of node features, enriched with structural and positional information, can outperform representations learned with classical graph neural networks (GNNs).
Our model, GraphiT, encodes such information by (i) leveraging relative positional encoding strategies in self-attention scores based on positive definite kernels on graphs, and (ii) enumerating and encoding local sub-structures such as paths of short length.
arXiv Detail & Related papers (2021-06-10T11:36:22Z) - Dirichlet Graph Variational Autoencoder [65.94744123832338]
We present Dirichlet Graph Variational Autoencoder (DGVAE) with graph cluster memberships as latent factors.
Motivated by the low pass characteristics in balanced graph cut, we propose a new variant of GNN named Heatts to encode the input graph into cluster memberships.
arXiv Detail & Related papers (2020-10-09T07:35:26Z)
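Several entries above invoke low-pass spectral filtering on graphs: the DGVAE abstract motivates its Heatts encoder by the low-pass characteristics of balanced graph cuts, and the SGCN entry builds convolutions from graph wavelets. The sketch below illustrates the underlying idea with a heat-kernel filter exp(−tL) on the symmetric normalized Laplacian; it is an illustrative example, not a reimplementation of Heatts or of any paper listed here.

```python
import numpy as np

def heat_kernel_filter(A, X, t=1.0):
    """Low-pass filter node features X with the graph heat kernel exp(-t L).

    L is the symmetric normalized Laplacian I - D^{-1/2} A D^{-1/2}.
    Eigencomponents with small eigenvalue (smooth signals) are attenuated
    least, so the operator acts as a low-pass filter on graph signals.
    """
    deg = A.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    # Spectral form: U diag(exp(-t * lambda)) U^T X
    w, U = np.linalg.eigh(L)
    return U @ (np.exp(-t * w)[:, None] * (U.T @ X))

# Example: a 4-node path graph; filtering diffuses a spike at node 0
# across the whole graph.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.array([[1.0], [0.0], [0.0], [0.0]])
Xs = heat_kernel_filter(A, X, t=2.0)
```

On a connected graph the heat kernel has strictly positive entries, so the filtered signal `Xs` is positive at every node and the spike at node 0 is attenuated, the smoothing behavior these abstracts rely on.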
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.