What Are Good Positional Encodings for Directed Graphs?
- URL: http://arxiv.org/abs/2407.20912v2
- Date: Fri, 4 Oct 2024 08:32:25 GMT
- Title: What Are Good Positional Encodings for Directed Graphs?
- Authors: Yinan Huang, Haoyu Wang, Pan Li
- Abstract summary: We introduce the notion of Walk Profile, a generalization of walk-counting sequences for directed graphs.
We propose a novel Multi-q Magnetic Laplacian PE, which extends the Magnetic Laplacian eigenvector-based PE by incorporating multiple potential factors.
Our numerical experiments validate the expressiveness of the proposed PEs and demonstrate their effectiveness in solving sorting network satisfiability.
- Score: 13.076497906728333
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Positional encodings (PEs) are essential for building powerful and expressive graph neural networks and graph transformers, as they effectively capture the relative spatial relationships between nodes. Although extensive research has been devoted to PEs in undirected graphs, PEs for directed graphs remain relatively unexplored. This work seeks to address this gap. We first introduce the notion of Walk Profile, a generalization of walk-counting sequences for directed graphs. A walk profile encompasses numerous structural features crucial for directed graph-relevant applications, such as program analysis and circuit performance prediction. We identify the limitations of existing PE methods in representing walk profiles and propose a novel Multi-q Magnetic Laplacian PE, which extends the Magnetic Laplacian eigenvector-based PE by incorporating multiple potential factors. The new PE can provably express walk profiles. Furthermore, we generalize prior basis-invariant neural networks to enable the stable use of the new PE in the complex domain. Our numerical experiments validate the expressiveness of the proposed PEs and demonstrate their effectiveness in solving sorting network satisfiability and performing well on general circuit benchmarks. Our code is available at https://github.com/Graph-COM/Multi-q-Maglap.
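For intuition about the two core constructions in the abstract, here are two short sketches. Both are illustrative reconstructions from the abstract alone, not the authors' reference implementation (see the linked repository for that).

First, a walk profile plausibly counts length-`ell` walks between node pairs, broken down by how many edges are traversed forward versus backward; the recurrence below is one such formalization, assumed from the summary above rather than taken from the paper.

```python
import numpy as np

def walk_profile(A, ell):
    """Count length-`ell` bidirectional walks, split by forward-edge count.

    Returns a list W with W[k][u, v] = number of length-`ell` walks from u
    to v that traverse exactly k edges forward and ell - k edges backward.
    """
    n = A.shape[0]
    F, B = A, A.T                       # forward / backward traversal
    W = [np.eye(n)]                     # length 0: only the trivial walk
    for _ in range(ell):
        new = []
        for k in range(len(W) + 1):
            cnt = np.zeros((n, n))
            if k < len(W):              # extend a walk with a backward edge
                cnt = cnt + W[k] @ B
            if k >= 1:                  # extend a walk with a forward edge
                cnt = cnt + W[k - 1] @ F
            new.append(cnt)
        W = new
    return W
```

Second, the Multi-q Magnetic Laplacian PE builds on the normalized magnetic Laplacian L_q = I - D^{-1/2} (A_s ⊙ exp(i·2πq(A - Aᵀ))) D^{-1/2}, where A_s is the symmetrized adjacency support; the multi-q idea concatenates eigenvectors computed at several potentials q. The q values, the number of eigenvectors k, and the dense eigensolver below are illustrative assumptions, and the paper's exact eigenvector selection and post-processing may differ.

```python
import numpy as np

def magnetic_laplacian(A, q):
    """Normalized magnetic Laplacian of a directed graph (dense sketch)."""
    A_sym = ((A + A.T) > 0).astype(float)            # symmetrized support
    theta = 2.0 * np.pi * q * (A - A.T)              # antisymmetric phases
    H = A_sym * np.exp(1j * theta)                   # Hermitian magnetic adjacency
    d = np.maximum(A_sym.sum(axis=1), 1e-12)         # guard isolated nodes
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.eye(A.shape[0]) - D_inv_sqrt @ H @ D_inv_sqrt

def multi_q_maglap_pe(A, qs=(0.0, 0.1, 0.25), k=2):
    """Concatenate the k smallest-eigenvalue eigenvectors of L_q over several q."""
    pes = []
    for q in qs:
        _, V = np.linalg.eigh(magnetic_laplacian(A, q))  # Hermitian eigendecomposition
        pes.append(V[:, :k])                             # complex (n, k) node features
    return np.concatenate(pes, axis=1)                   # shape (n, k * len(qs))

# toy directed 4-cycle: 0 -> 1 -> 2 -> 3 -> 0
A = np.zeros((4, 4))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    A[u, v] = 1.0
print(multi_q_maglap_pe(A).shape)  # (4, 6)
```

Note that q = 0 recovers the ordinary Laplacian of the symmetrized graph, which discards direction; varying q is what lets the encoding separate forward from backward walks.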
Related papers
- Learning Efficient Positional Encodings with Graph Neural Networks [109.8653020407373]
We introduce PEARL, a novel framework of learnable PEs for graphs.
PEARL approximates equivariant functions of eigenvectors with linear complexity, while rigorously establishing its stability and high expressive power.
arXiv Detail & Related papers (2025-02-03T07:28:53Z)
- Range-aware Positional Encoding via High-order Pretraining: Theory and Practice [14.521929085104441]
Unsupervised pre-training on vast amounts of graph data is critical in real-world applications wherein labeled data is limited.
We propose a novel pre-training strategy on graphs that focuses on modeling their multi-resolution structural information.
Because our approach relies solely on the graph structure, it is domain-agnostic and adaptable to datasets from various domains.
arXiv Detail & Related papers (2024-09-27T19:53:10Z)
- Graph Positional and Structural Encoder [11.647944336315346]
We present the first-ever graph encoder designed to capture rich PSE representations for augmenting any GNN.
GPSE learns an efficient common latent representation for multiple PSEs, and is highly transferable.
We show that GPSE-enhanced models can significantly outperform those that employ explicitly computed PSEs on some tasks, and at least match their performance on others.
arXiv Detail & Related papers (2023-07-14T01:04:18Z)
- Bridging Graph Position Encodings for Transformers with Weighted Graph-Walking Automata [20.539191533339427]
We introduce a new graph PE, Graph Automaton PE (GAPE), based on weighted graph-walking automata.
We compare the performance of GAPE with other PE schemes on both machine translation and graph-structured tasks.
arXiv Detail & Related papers (2022-12-13T20:51:03Z)
- Affinity-Aware Graph Networks [9.888383815189176]
Graph Neural Networks (GNNs) have emerged as a powerful technique for learning on relational data.
We explore the use of affinity measures as features in graph neural networks.
We propose message passing networks based on these features and evaluate their performance on a variety of node and graph property prediction tasks.
arXiv Detail & Related papers (2022-06-23T18:51:35Z)
- Handling Distribution Shifts on Graphs: An Invariance Perspective [78.31180235269035]
We formulate the OOD problem on graphs and develop a new invariant learning approach, Explore-to-Extrapolate Risk Minimization (EERM).
EERM resorts to multiple context explorers that are adversarially trained to maximize the variance of risks from multiple virtual environments.
We prove the validity of our method by theoretically showing its guarantee of a valid OOD solution.
arXiv Detail & Related papers (2022-02-05T02:31:01Z)
- Graph Neural Networks with Learnable Structural and Positional Representations [83.24058411666483]
A major issue with arbitrary graphs is the absence of canonical positional information of nodes.
We introduce positional encodings (PE) of nodes and inject them into the input layer, as in Transformers.
We observe a performance increase for molecular datasets, from 2.87% up to 64.14% when considering learnable PE for both GNN classes.
arXiv Detail & Related papers (2021-10-15T05:59:15Z)
- Simple Graph Convolutional Networks [72.92604941595019]
We propose simple graph convolution operators that can be implemented in single-layer graph convolutional networks.
We show that our convolution operators are more theoretically grounded than many proposals in the literature, and exhibit state-of-the-art predictive performance on the considered benchmark datasets.
arXiv Detail & Related papers (2021-06-10T15:23:59Z)
- Scaling Graph Neural Networks with Approximate PageRank [64.92311737049054]
We present the PPRGo model, which utilizes an efficient approximation of information diffusion in GNNs.
In addition to being faster, PPRGo is inherently scalable, and can be trivially parallelized for large datasets like those found in industry settings.
We show that training PPRGo and predicting labels for all nodes of a large graph takes under 2 minutes on a single machine, far outpacing other baselines on the same graph (a generic sketch of the underlying approximate personalized PageRank push appears after this list).
arXiv Detail & Related papers (2020-07-03T09:30:07Z)
- Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximative framework for such non-trivial ERGs that results in dyadic-independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
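As background for the PPRGo entry above, the following is a generic forward-push approximation of personalized PageRank, the primitive on which PPRGo-style diffusion builds. This is a textbook-style sketch, not PPRGo's actual code; the teleport probability `alpha`, tolerance `eps`, and the `adj` dictionary format are illustrative assumptions.

```python
from collections import deque

def push_ppr(adj, source, alpha=0.15, eps=1e-4):
    """Approximate personalized PageRank vector for `source` via forward push.

    adj : dict mapping each node to a list of its out-neighbors.
    Returns a sparse dict of PPR mass; residual below eps * degree is dropped.
    """
    p = {}                               # approximate PPR estimates
    r = {source: 1.0}                    # residual probability mass
    queue = deque([source])
    while queue:
        u = queue.popleft()
        deg = max(len(adj.get(u, ())), 1)
        ru = r.get(u, 0.0)
        if ru < eps * deg:               # residual too small to push
            continue
        p[u] = p.get(u, 0.0) + alpha * ru
        r[u] = 0.0
        # dangling nodes simply absorb their mass in this simplified sketch
        share = (1.0 - alpha) * ru / deg
        for v in adj.get(u, ()):
            r[v] = r.get(v, 0.0) + share
            if r[v] >= eps * max(len(adj.get(v, ())), 1):
                queue.append(v)          # v now holds enough mass to push
    return p

# usage: directed triangle with an extra sink node
adj = {0: [1], 1: [2], 2: [0, 3], 3: []}
print(push_ppr(adj, source=0))
```

Each push removes at least alpha * eps residual mass from a total of 1, so the loop terminates after a bounded number of pushes; this locality is what makes PPR-style propagation scale to large graphs.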
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.