LEAP: Local ECT-Based Learnable Positional Encodings for Graphs
- URL: http://arxiv.org/abs/2510.00757v1
- Date: Wed, 01 Oct 2025 10:44:01 GMT
- Title: LEAP: Local ECT-Based Learnable Positional Encodings for Graphs
- Authors: Juan Amboage, Ernst Röell, Patrick Schnider, Bastian Rieck
- Abstract summary: Graph positional encoding (PE) has emerged as a promising direction to address these limitations. We propose LEAP, a new end-to-end trainable local structural PE for graphs. Our results underline the potential of LEAP-based encodings as a powerful component for graph representation learning pipelines.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) largely rely on the message-passing paradigm, where nodes iteratively aggregate information from their neighbors. Yet, standard message passing neural networks (MPNNs) face well-documented theoretical and practical limitations. Graph positional encoding (PE) has emerged as a promising direction to address these limitations. The Euler Characteristic Transform (ECT) is an efficiently computable geometric-topological invariant that characterizes shapes and graphs. In this work, we combine the differentiable approximation of the ECT (DECT) and its local variant ($\ell$-ECT) to propose LEAP, a new end-to-end trainable local structural PE for graphs. We evaluate our approach on multiple real-world datasets as well as on a synthetic task designed to test its ability to extract topological features. Our results underline the potential of LEAP-based encodings as a powerful component for graph representation learning pipelines.
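To make the main ingredient concrete, here is a minimal sketch of a smoothed (differentiable) ECT for a graph, in the spirit of DECT: for a direction $\xi$ and height $t$, the Euler characteristic of the sublevel set counts vertices minus edges whose heights $\langle x_v, \xi\rangle$ lie below $t$, and replacing the hard indicator with a sigmoid of temperature $\tau$ makes the curve differentiable in the node coordinates. The function name `dect`, the temperature `tau`, and the direction/threshold sampling below are illustrative assumptions, not the authors' API.

```python
import torch

def dect(x, edge_index, directions, thresholds, tau=0.1):
    """Smoothed Euler characteristic curves of a graph (illustrative sketch).

    x          : (n, d) node coordinates or learned embeddings
    edge_index : (2, e) edge list
    directions : (m, d) unit directions
    thresholds : (s,)   filtration heights
    returns    : (m, s) one smoothed curve chi(xi, .) per direction
    """
    h_v = x @ directions.t()                                   # node heights, (n, m)
    # An edge enters the sublevel set once its higher endpoint does, (e, m)
    h_e = torch.maximum(h_v[edge_index[0]], h_v[edge_index[1]])
    # sigmoid((t - h) / tau) is a differentiable surrogate for 1[h <= t]
    t = thresholds.view(1, 1, -1)                              # (1, 1, s)
    chi_v = torch.sigmoid((t - h_v.unsqueeze(-1)) / tau).sum(dim=0)
    chi_e = torch.sigmoid((t - h_e.unsqueeze(-1)) / tau).sum(dim=0)
    return chi_v - chi_e          # Euler characteristic of a graph: #V - #E

# Toy usage: 8 directions on the circle, 16 thresholds
x = torch.randn(10, 2)
edge_index = torch.tensor([[0, 1, 2, 4], [1, 2, 3, 5]])
theta = torch.linspace(0, 2 * torch.pi, 8)
directions = torch.stack([theta.cos(), theta.sin()], dim=1)
ect = dect(x, edge_index, directions, torch.linspace(-3, 3, 16))  # (8, 16)
```

Because every operation above is differentiable, gradients flow back into `x` (and into `directions` or `thresholds` if those are made learnable), which is what makes an end-to-end trainable encoding like LEAP possible.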
Related papers
- Efficient Environmental Claim Detection with Hyperbolic Graph Neural Networks
We explore Graph Neural Networks (GNNs) and Hyperbolic Graph Neural Networks (HGNNs) as lightweight yet effective alternatives to transformer-based models. Our results show that our graph-based models, particularly HGNNs in the Poincaré space (P-HGNNs), achieve performance superior to the state-of-the-art on environmental claim detection.
arXiv Detail & Related papers (2025-02-19T11:04:59Z)
- Learning Efficient Positional Encodings with Graph Neural Networks
We introduce PEARL, a novel framework of learnable PEs for graphs. Our analysis demonstrates that PEARL approximates equivariant functions of eigenvectors with linear complexity, while rigorously establishing its stability and high expressive power.
arXiv Detail & Related papers (2025-02-03T07:28:53Z)
- Diss-l-ECT: Dissecting Graph Data with Local Euler Characteristic Transforms
We introduce the Local Euler Characteristic Transform ($\ell$-ECT) to enhance expressivity and interpretability in graph representation learning. Our method exhibits superior performance compared to standard Graph Neural Networks (GNNs) on a variety of node-classification tasks. (A minimal per-node $\ell$-ECT sketch appears after this list.)
arXiv Detail & Related papers (2024-10-03T16:02:02Z)
- Scalable Graph Compressed Convolutions
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
arXiv Detail & Related papers (2024-07-26T03:14:13Z)
- A new perspective on building efficient and expressive 3D equivariant graph neural networks
We propose a hierarchy of 3D isomorphism to evaluate the expressive power of equivariant GNNs.
Our work leads to two crucial modules for designing expressive and efficient geometric GNNs.
To demonstrate the applicability of our theory, we propose LEFTNet which effectively implements these modules.
arXiv Detail & Related papers (2023-04-07T18:08:27Z)
- Learnable Filters for Geometric Scattering Modules
We propose a new graph neural network (GNN) module based on relaxations of recently proposed geometric scattering transforms.
Our learnable geometric scattering (LEGS) module enables adaptive tuning of the wavelets to encourage band-pass features to emerge in learned representations.
arXiv Detail & Related papers (2022-08-15T22:30:07Z)
- Graph Neural Networks with Learnable Structural and Positional Representations
A major issue with arbitrary graphs is the absence of canonical positional information of nodes.
We introduce positional encodings (PE) of nodes and inject them into the input layer, as in Transformers.
We observe a performance increase for molecular datasets, from 2.87% up to 64.14% when considering learnable PE for both GNN classes.
arXiv Detail & Related papers (2021-10-15T05:59:15Z)
- GraphiT: Encoding Graph Structure in Transformers
We show that viewing graphs as sets of node features enriched with structural and positional information can outperform representations learned with classical graph neural networks (GNNs).
Our model, GraphiT, encodes such information by (i) leveraging relative positional encoding strategies in self-attention scores based on positive definite kernels on graphs, and (ii) enumerating and encoding local sub-structures such as paths of short length.
arXiv Detail & Related papers (2021-06-10T11:36:22Z)
- Data-Driven Learning of Geometric Scattering Networks
We propose a new graph neural network (GNN) module based on relaxations of recently proposed geometric scattering transforms.
Our learnable geometric scattering (LEGS) module enables adaptive tuning of the wavelets to encourage band-pass features to emerge in learned representations.
arXiv Detail & Related papers (2020-10-06T01:20:27Z)
- Building powerful and equivariant graph neural networks with structural message-passing
We propose a powerful and equivariant message-passing framework based on two ideas.
First, we propagate a one-hot encoding of the nodes, in addition to the features, in order to learn a local context matrix around each node.
Second, we propose methods for the parametrization of the message and update functions that ensure permutation equivariance.
arXiv Detail & Related papers (2020-06-26T17:15:16Z)
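Since several entries above revolve around local structure, here is a companion sketch (referenced from the Diss-l-ECT entry) of how a local $\ell$-ECT could plausibly yield per-node positional encodings, reusing the `dect` function sketched after the abstract: compute the smoothed ECT of each node's $k$-hop neighborhood, centered on that node, and flatten it into a feature vector. The helper `local_ect_pe` and the reliance on `torch_geometric.utils.k_hop_subgraph` are assumptions for illustration; the actual LEAP construction may differ.

```python
import torch
from torch_geometric.utils import k_hop_subgraph  # assumes PyTorch Geometric

def local_ect_pe(x, edge_index, directions, thresholds, num_hops=2, tau=0.1):
    """Per-node PEs from local smoothed ECTs (illustrative sketch)."""
    n = x.size(0)
    pes = []
    for v in range(n):
        # Induced k-hop subgraph around v, node indices relabeled locally
        subset, sub_edges, _, _ = k_hop_subgraph(
            v, num_hops, edge_index, relabel_nodes=True, num_nodes=n)
        x_local = x[subset] - x[v]            # center the neighborhood on v
        chi = dect(x_local, sub_edges, directions, thresholds, tau)
        pes.append(chi.flatten())
    return torch.stack(pes)                   # (n, num_directions * num_thresholds)
```

The resulting vectors can be concatenated with the input node features of any MPNN, which is the usual way structural PEs enter a graph representation learning pipeline.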