Positional Encoding meets Persistent Homology on Graphs
- URL: http://arxiv.org/abs/2506.05814v1
- Date: Fri, 06 Jun 2025 07:22:17 GMT
- Title: Positional Encoding meets Persistent Homology on Graphs
- Authors: Yogesh Verma, Amauri H. Souza, Vikas Garg
- Abstract summary: Positional encoding (PE) and Persistent Homology (PH) have emerged as promising approaches to mitigate this issue. We provide a rigorous theoretical characterization of the relative merits and shortcomings of PE and PH. Our insights inform the design of a novel learnable method, PiPE, which is provably more expressive than both PH and PE.
- Score: 6.314000948709255
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The local inductive bias of message-passing graph neural networks (GNNs) hampers their ability to exploit key structural information (e.g., connectivity and cycles). Positional encoding (PE) and Persistent Homology (PH) have emerged as two promising approaches to mitigate this issue. PE schemes endow GNNs with location-aware features, while PH methods enhance GNNs with multiresolution topological features. However, a rigorous theoretical characterization of the relative merits and shortcomings of PE and PH has remained elusive. We bridge this gap by establishing that neither paradigm is more expressive than the other, providing novel constructions where one approach fails but the other succeeds. Our insights inform the design of a novel learnable method, PiPE (Persistence-informed Positional Encoding), which is provably more expressive than both PH and PE. PiPE demonstrates strong performance across a variety of tasks (e.g., molecule property prediction, graph classification, and out-of-distribution generalization), thereby advancing the frontiers of graph representation learning. Code is available at https://github.com/Aalto-QuML/PIPE.
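To make the PE side of the comparison concrete, below is a minimal sketch of Laplacian eigenvector positional encodings, one of the standard PE schemes the abstract alludes to (not necessarily the variant PiPE builds on); it assumes numpy and networkx. The PH side is illustrated further down, under the PHLP entry.

```python
import numpy as np
import networkx as nx

def laplacian_eigenvector_pe(G: nx.Graph, k: int = 4) -> np.ndarray:
    """Return k non-trivial eigenvectors of the normalized graph Laplacian
    as per-node positional features (shape: n_nodes x k)."""
    L = nx.normalized_laplacian_matrix(G).toarray()
    eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    # Skip the trivial eigenvector at eigenvalue ~0. Note each eigenvector
    # is only defined up to sign, which is one motivation for the
    # learnable/invariant PE schemes discussed in the papers below.
    return eigvecs[:, 1:k + 1]

G = nx.cycle_graph(6)
print(laplacian_eigenvector_pe(G, k=2).shape)  # (6, 2)
```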
Related papers
- Learning Efficient Positional Encodings with Graph Neural Networks [109.8653020407373]
We introduce PEARL, a novel framework of learnable PEs for graphs. Our analysis demonstrates that PEARL approximates equivariant functions of eigenvectors with linear complexity, while rigorously establishing its stability and high expressive power.
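PEARL itself sidesteps explicit eigendecomposition, so the toy below only illustrates the underlying idea of a learnable, permutation-equivariant and sign-invariant function of eigenvectors; `W1` and `W2` are hypothetical stand-ins for learned weights, not PEARL's actual parameterization.

```python
import numpy as np

def sign_invariant_pe(eigvecs: np.ndarray, W1: np.ndarray, W2: np.ndarray) -> np.ndarray:
    """Toy sign-invariant map rho(phi(v) + phi(-v)) applied per eigenvector
    column; W1 (d,) and W2 (d, d_out) are hypothetical learned parameters."""
    def phi(x):
        # ReLU (not an odd function), so phi(v) + phi(-v) is non-degenerate.
        return np.maximum(x[..., None] * W1, 0.0)  # (n, k) -> (n, k, d)
    h = phi(eigvecs) + phi(-eigvecs)               # invariant to sign flips v -> -v
    return (h @ W2).reshape(eigvecs.shape[0], -1)  # (n, k * d_out)

rng = np.random.default_rng(0)
V = rng.normal(size=(6, 2))                        # stand-in eigenvectors
out = sign_invariant_pe(V, rng.normal(size=4), rng.normal(size=(4, 3)))
print(out.shape)                                   # (6, 6)
```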
arXiv Detail & Related papers (2025-02-03T07:28:53Z) - What Are Good Positional Encodings for Directed Graphs? [13.076497906728333]
We introduce the notion of Walk Profile, a generalization of walk-counting sequences for directed graphs.
We propose a novel Multi-q Magnetic Laplacian PE, which extends the Magnetic Laplacian eigenvector-based PE by incorporating multiple potential factors.
Our numerical experiments validate the expressiveness of the proposed PEs and demonstrate their effectiveness in solving sorting network satisfiability.
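For intuition, here is a minimal single-q sketch of a magnetic Laplacian eigenvector PE. The construction itself is standard in the directed-graph literature; the paper's Multi-q variant would stack encodings for several values of the potential q.

```python
import numpy as np
import networkx as nx

def magnetic_laplacian_pe(G: nx.DiGraph, q: float = 0.25, k: int = 2) -> np.ndarray:
    """Eigenvectors of the magnetic Laplacian as directed-graph PEs."""
    A = nx.to_numpy_array(G)
    A_sym = (A + A.T) / 2.0
    theta = 2.0 * np.pi * q * (A - A.T)       # phase encodes edge direction
    H = A_sym * np.exp(1j * theta)            # Hermitian "adjacency"
    L = np.diag(A_sym.sum(axis=1)) - H        # magnetic Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)      # Hermitian -> real eigenvalues
    v = eigvecs[:, :k]
    # Split complex eigenvectors into real features of shape (n, 2k).
    return np.concatenate([v.real, v.imag], axis=1)

G = nx.DiGraph([(0, 1), (1, 2), (2, 0), (2, 3)])
print(magnetic_laplacian_pe(G).shape)         # (4, 4)
```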
arXiv Detail & Related papers (2024-07-30T15:38:14Z) - PHLP: Sole Persistent Homology for Link Prediction - Interpretable Feature Extraction [2.8413459430736396]
Link prediction (LP) is a significant research area in graph data, where a link represents essential information on relationships between nodes. Although graph neural network (GNN)-based models have achieved high performance in LP, understanding why they perform well is challenging because most comprise complex neural networks. We propose a novel method that employs PH for LP (PHLP), focusing on how the presence or absence of target links influences the overall topology.
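For intuition about the PH machinery PHLP relies on, here is a minimal dimension-0 persistence computation on a graph under an edge-weight filtration (a plain union-find pass, not PHLP's actual pipeline): every node is born at time 0, and a connected component dies when an edge merges it into another.

```python
def zero_dim_persistence(n_nodes, weighted_edges):
    """Dimension-0 persistence pairs under an edge-weight filtration.
    weighted_edges: iterable of (weight, u, v) tuples."""
    parent = list(range(n_nodes))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    pairs = []
    for w, u, v in sorted(weighted_edges):  # process edges by filtration value
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv                 # merge: one component dies at w
            pairs.append((0.0, w))
    # Components that never merge persist forever.
    roots = {find(x) for x in range(n_nodes)}
    pairs += [(0.0, float("inf"))] * len(roots)
    return pairs

edges = [(0.3, 0, 1), (0.5, 1, 2), (0.9, 0, 2)]
print(zero_dim_persistence(3, edges))  # [(0.0, 0.3), (0.0, 0.5), (0.0, inf)]
```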
arXiv Detail & Related papers (2024-04-23T16:54:56Z) - Boosting Graph Pooling with Persistent Homology [8.477383770884508]
Naively plugging PH features into GNN layers yields only marginal improvements with low interpretability.
We investigate a novel mechanism for injecting global topological invariance into pooling layers using PH, motivated by the observation that the filtration operation in PH naturally aligns with graph pooling in a cut-off manner.
Experimentally, we apply our mechanism to a collection of graph pooling methods and observe consistent and substantial performance gains on several popular datasets.
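One hypothetical way to read "aligns with graph pooling in a cut-off manner": score nodes with a filtration function and keep those above a cut-off, as in top-k pooling. The sketch below uses node degree as the filtration value purely for illustration; the paper's mechanism is more involved.

```python
import networkx as nx

def topk_pool_by_filtration(G: nx.Graph, ratio: float = 0.5) -> nx.Graph:
    """Hypothetical PH-flavored pooling: rank nodes by a filtration value
    (here: degree) and keep the induced subgraph on the top-ratio nodes,
    mimicking a filtration cut-off."""
    scores = dict(G.degree())                       # filtration function f(v)
    k = max(1, int(ratio * G.number_of_nodes()))
    keep = sorted(scores, key=scores.get, reverse=True)[:k]
    return G.subgraph(keep).copy()

G = nx.barbell_graph(4, 2)
print(topk_pool_by_filtration(G, ratio=0.5).number_of_nodes())  # 5
```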
arXiv Detail & Related papers (2024-02-26T07:00:24Z) - Going beyond persistent homology using persistent homology [5.724311218570011]
We introduce the novel concept of color-separating sets to provide a complete resolution to the problem of characterizing which graphs persistent homology can distinguish.
We propose RePHINE for learning topological features on graphs.
arXiv Detail & Related papers (2023-11-10T16:12:35Z) - Quantifying the Optimization and Generalization Advantages of Graph Neural Networks Over Multilayer Perceptrons [50.33260238739837]
Graph neural networks (GNNs) have demonstrated remarkable capabilities in learning from graph-structured data. There remains a lack of analysis comparing GNNs and MLPs from an optimization and generalization perspective.
arXiv Detail & Related papers (2023-06-24T10:21:11Z) - Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z) - Graph Neural Networks with Learnable Structural and Positional Representations [83.24058411666483]
A major issue with arbitrary graphs is the absence of canonical positional information of nodes.
We introduce positional encodings (PE) of nodes and inject them into the input layer, as in Transformers.
We observe a performance increase for molecular datasets, from 2.87% up to 64.14% when considering learnable PE for both GNN classes.
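The injection step described above amounts to concatenating the PE with the raw node features before the first message-passing layer; a minimal sketch (the shapes are made up):

```python
import numpy as np

def inject_pe(node_feats: np.ndarray, pe: np.ndarray) -> np.ndarray:
    """Concatenate positional encodings with raw node features so the
    first message-passing layer sees location-aware inputs."""
    return np.concatenate([node_feats, pe], axis=1)

x = np.ones((6, 8))                            # 6 nodes, 8 raw features
pe = np.random.default_rng(0).normal(size=(6, 2))
print(inject_pe(x, pe).shape)                  # (6, 10)
```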
arXiv Detail & Related papers (2021-10-15T05:59:15Z) - Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Stacking many such layers, however, tends to degrade performance; several recent studies attribute this deterioration to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
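A rough sketch of the DAGNN idea, decoupling propagation from transformation and combining multi-hop representations; the real model learns per-node gates, for which the fixed uniform weights below are only a stand-in.

```python
import numpy as np
import networkx as nx

def dagnn_style_aggregate(G: nx.Graph, X: np.ndarray, K: int = 4, gates=None) -> np.ndarray:
    """Propagate features over K hops with the symmetrically normalized
    adjacency (self-loops added), then combine the hop representations."""
    A = nx.to_numpy_array(G) + np.eye(G.number_of_nodes())
    d = A.sum(axis=1)
    A_hat = A / np.sqrt(np.outer(d, d))           # D^{-1/2} (A + I) D^{-1/2}
    hops, H = [X], X
    for _ in range(K):
        H = A_hat @ H                             # one more hop of smoothing
        hops.append(H)
    if gates is None:
        gates = np.full(K + 1, 1.0 / (K + 1))     # uniform stand-in weights
    return sum(g * h for g, h in zip(gates, hops))

G = nx.path_graph(5)
print(dagnn_style_aggregate(G, np.eye(5)).shape)  # (5, 5)
```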
arXiv Detail & Related papers (2020-07-18T01:11:14Z) - Improving Graph Neural Network Expressivity via Subgraph Isomorphism Counting [63.04999833264299]
"Graph Substructure Networks" (GSN) is a topologically-aware message passing scheme based on substructure encoding.
We show that it is strictly more expressive than the Weisfeiler-Leman (WL) graph isomorphism test.
We perform an extensive evaluation on graph classification and regression tasks and obtain state-of-the-art results in diverse real-world settings.
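The substructure-encoding idea is easy to illustrate with the smallest non-trivial substructure, triangles; GSN generalizes this to arbitrary subgraph-isomorphism counts.

```python
import numpy as np
import networkx as nx

def triangle_count_features(G: nx.Graph) -> np.ndarray:
    """Per-node triangle counts as a simple substructure encoding,
    appended to node features to break 1-WL symmetry."""
    tri = nx.triangles(G)                       # node -> #triangles through it
    return np.array([[tri[v]] for v in G.nodes()], dtype=float)

G = nx.complete_graph(4)                        # every node lies in 3 triangles
print(triangle_count_features(G).ravel())       # [3. 3. 3. 3.]
```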
arXiv Detail & Related papers (2020-06-16T15:30:31Z)