Generalized Laplacian Positional Encoding for Graph Representation Learning
- URL: http://arxiv.org/abs/2210.15956v1
- Date: Fri, 28 Oct 2022 07:21:57 GMT
- Title: Generalized Laplacian Positional Encoding for Graph Representation Learning
- Authors: Sohir Maskey, Ali Parviz, Maximilian Thiessen, Hannes Stärk, Ylli Sadikaj, Haggai Maron
- Abstract summary: Graph neural networks (GNNs) are the primary tool for processing graph-structured data.
Recent works have adapted the idea of positional encodings to graph data.
This paper draws inspiration from the recent success of Laplacian-based positional encoding and defines a novel family of positional encoding schemes for graphs.
- Score: 15.723716197068574
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) are the primary tool for processing
graph-structured data. Unfortunately, the most commonly used GNNs, called
Message Passing Neural Networks (MPNNs), suffer from several fundamental
limitations. To overcome these limitations, recent works have adapted the idea
of positional encodings to graph data. This paper draws inspiration from the
recent success of Laplacian-based positional encoding and defines a novel
family of positional encoding schemes for graphs. We accomplish this by
generalizing the optimization problem that defines the Laplace embedding to
more general dissimilarity functions rather than the 2-norm used in the
original formulation. This family of positional encodings is then instantiated
by considering p-norms. We discuss a method for calculating these positional
encoding schemes, implement it in PyTorch and demonstrate how the resulting
positional encoding captures different properties of the graph. Furthermore, we
demonstrate that this novel family of positional encodings can improve the
expressive power of MPNNs. Lastly, we present preliminary experimental results.
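To make the setup concrete: the classical Laplace embedding is the minimizer of $\mathrm{tr}(X^\top L X) = \sum_{(u,v) \in E} \|x_u - x_v\|_2^2$ over orthonormal $X$, i.e. the bottom eigenvectors of the graph Laplacian $L$; the paper generalizes this edge-wise dissimilarity, instantiated with $p$-norms. The sketch below is a reconstruction from the abstract alone, not the authors' PyTorch implementation: the projected-gradient solver, the QR re-projection, and all function names are our assumptions.

```python
import torch

def laplacian_pe(adj: torch.Tensor, k: int) -> torch.Tensor:
    """Classical Laplacian PE (the p = 2 case): bottom-k nontrivial
    eigenvectors of the combinatorial graph Laplacian."""
    lap = torch.diag(adj.sum(dim=1)) - adj
    _, eigvecs = torch.linalg.eigh(lap)  # eigenvalues in ascending order
    return eigvecs[:, 1:k + 1]           # drop the constant eigenvector

def p_norm_pe(adj: torch.Tensor, k: int, p: float = 1.0,
              steps: int = 500, lr: float = 1e-2) -> torch.Tensor:
    """Hypothetical p-norm generalization: minimize
    sum_{(u,v) in E} ||x_u - x_v||_p^p over orthonormal X,
    via Adam plus QR re-projection onto the constraint set."""
    n = adj.shape[0]
    x = torch.linalg.qr(torch.randn(n, k)).Q  # random orthonormal start
    x.requires_grad_(True)
    src, dst = adj.nonzero(as_tuple=True)     # edge list (both directions)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (x[src] - x[dst]).abs().pow(p).sum()
        loss.backward()
        opt.step()
        with torch.no_grad():                  # project back: X <- QR(X).Q
            x.copy_(torch.linalg.qr(x).Q)
    return x.detach()
```

For $p = 2$ this scheme should roughly recover the spectral embedding (up to rotation), while $p = 1$ is expected to emphasize sparser, cut-like structure; the paper's actual constraint handling and solver may differ.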
Related papers
- Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
arXiv Detail & Related papers (2024-07-26T03:14:13Z)
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z)
- Improving Subgraph-GNNs via Edge-Level Ego-Network Encodings [3.8711489380602804]
We present a novel edge-level ego-network encoding for learning on graphs.
It can boost Message Passing Graph Neural Networks (MP-GNNs) by providing additional node and edge features.
We show theoretically that such encoding is more expressive than node-based sub-graph MP-GNNs.
arXiv Detail & Related papers (2023-12-10T15:05:23Z)
- Recurrent Distance Filtering for Graph Representation Learning [34.761926988427284]
Graph neural networks based on iterative one-hop message passing have been shown to struggle in harnessing the information from distant nodes effectively.
We propose a new architecture to address this challenge.
Our model aggregates other nodes by their shortest distances to the target and uses a linear RNN to encode the sequence of hop representations (see the sketch below).
arXiv Detail & Related papers (2023-12-03T23:36:16Z)
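A minimal reading of this mechanism, assuming mean-pooling per hop and an ordinary `torch.nn.RNN` in place of the authors' linear RNN (both assumptions, since the summary does not specify them):

```python
import torch
import torch.nn as nn

def hop_features(adj: torch.Tensor, feats: torch.Tensor,
                 target: int, max_hops: int) -> torch.Tensor:
    """Group node features by BFS shortest-path distance from `target`
    and mean-pool each hop; returns a (max_hops + 1, feat_dim) sequence."""
    n = adj.shape[0]
    dist = torch.full((n,), -1, dtype=torch.long)
    dist[target] = 0
    frontier, d = [target], 0
    while frontier and d < max_hops:
        nxt = []
        for u in frontier:
            for v in adj[u].nonzero(as_tuple=True)[0].tolist():
                if dist[v] == -1:
                    dist[v] = d + 1
                    nxt.append(v)
        frontier, d = nxt, d + 1
    hops = [feats[dist == h].mean(dim=0) if (dist == h).any()
            else torch.zeros(feats.shape[1]) for h in range(max_hops + 1)]
    return torch.stack(hops)

class DistanceFilterEncoder(nn.Module):
    """Encode the hop sequence with an RNN; the final hidden state is
    the target node's representation."""
    def __init__(self, feat_dim: int, hidden_dim: int):
        super().__init__()
        self.rnn = nn.RNN(feat_dim, hidden_dim, batch_first=True)

    def forward(self, hop_feats: torch.Tensor) -> torch.Tensor:
        _, h_n = self.rnn(hop_feats.unsqueeze(0))  # add batch dim
        return h_n[-1, 0]                          # (hidden_dim,)
```

Calling `DistanceFilterEncoder(feat_dim, hidden_dim)(hop_features(adj, feats, target=0, max_hops=4))` would yield one embedding per target node.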
- Graph Neural Network Bandits [89.31889875864599]
We consider the bandit optimization problem with the reward function defined over graph-structured data.
Key challenges in this setting are scaling to large domains and to graphs with many nodes.
We show that graph neural networks (GNNs) can be used to estimate the reward function.
arXiv Detail & Related papers (2022-07-13T18:12:36Z)
- ResNorm: Tackling Long-tailed Degree Distribution Issue in Graph Neural Networks via Normalization [80.90206641975375]
This paper focuses on improving the performance of GNNs via normalization.
By studying the long-tailed distribution of node degrees in the graph, we propose a novel normalization method for GNNs, termed ResNorm.
The $scale$ operation of ResNorm reshapes the node-wise standard deviation (NStd) distribution so as to improve the accuracy of tail nodes.
arXiv Detail & Related papers (2022-06-16T13:49:09Z)
- Rewiring with Positional Encodings for Graph Neural Networks [37.394229290996364]
Several recent works use positional encodings to extend receptive fields of graph neural network layers equipped with attention mechanisms.
We use positional encodings to expand receptive fields to $r$-hop neighborhoods (see the sketch below).
We obtain improvements on a variety of models and datasets and reach competitive performance using traditional GNNs or graph Transformers.
arXiv Detail & Related papers (2022-01-29T22:26:02Z)
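A rough illustration of the rewiring idea referenced above, under the assumption that expanding the receptive field means densifying the graph with all pairs within $r$ hops and exposing the hop distance as a positional edge feature (the paper's exact construction may differ):

```python
import torch

def rewire_r_hop(adj: torch.Tensor, r: int):
    """Connect every node pair within r hops; return the rewired adjacency
    and a hop-distance matrix usable as a positional edge encoding."""
    n = adj.shape[0]
    eye = torch.eye(n, dtype=torch.bool)
    reached = eye.clone()
    dist = torch.full((n, n), float("inf"))
    dist.fill_diagonal_(0.0)
    paths = adj.bool()                   # pairs joined by a length-h path
    for h in range(1, r + 1):
        new = paths & ~reached           # pairs first reached at hop h
        dist[new] = float(h)
        reached |= paths
        paths = (paths.float() @ adj.float()).bool()
    rewired = (reached & ~eye).float()
    return rewired, dist
```

The integer distances in `dist` could then be embedded (e.g., with `nn.Embedding`) and injected as edge features in an attention-based layer.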
- Graph Neural Networks with Learnable Structural and Positional Representations [83.24058411666483]
A major issue with arbitrary graphs is the absence of canonical positional information of nodes.
We introduce positional encodings (PE) of nodes and inject them into the input layer, as in Transformers.
We observe a performance increase for molecular datasets, from 2.87% up to 64.14% when considering learnable PE for both GNN classes.
arXiv Detail & Related papers (2021-10-15T05:59:15Z)
- A Robust Alternative for Graph Convolutional Neural Networks via Graph Neighborhood Filters [84.20468404544047]
We present neighborhood graph filters (NGFs), a family of graph filters that replace the powers of the graph shift operator with $k$-hop neighborhood adjacency matrices.
NGFs help alleviate the numerical issues of traditional graph filters, allow for the design of deeper GCNNs, and enhance robustness to errors in the graph topology (see the sketch below).
arXiv Detail & Related papers (2021-10-02T17:05:27Z)
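For concreteness, a classical graph filter computes $y = \sum_k h_k S^k x$ for a shift operator $S$; the NGF summary above swaps the powers $S^k$ for $k$-hop neighborhood matrices. The sketch below assumes "k-hop neighborhood adjacency" means pairs at shortest-path distance exactly $k$, which is our reading rather than a detail stated in the summary:

```python
import torch

def k_hop_matrices(adj: torch.Tensor, K: int) -> list[torch.Tensor]:
    """A_k[u, v] = 1 iff the shortest path from u to v uses exactly k
    edges (A_0 = I), built from thresholded boolean powers of adj."""
    n = adj.shape[0]
    mats = [torch.eye(n)]
    reached = torch.eye(n, dtype=torch.bool)
    paths = adj.bool()
    for _ in range(K):
        mats.append((paths & ~reached).float())
        reached |= paths
        paths = (paths.float() @ adj.float()).bool()
    return mats

def neighborhood_graph_filter(adj: torch.Tensor, x: torch.Tensor,
                              coeffs: torch.Tensor) -> torch.Tensor:
    """y = sum_k h_k * A_k @ x, in place of the classical sum_k h_k S^k x."""
    mats = k_hop_matrices(adj, K=coeffs.numel() - 1)
    return sum(h * (a @ x) for h, a in zip(coeffs, mats))
```

Unlike raw powers $S^k$, the $A_k$ stay binary and well-conditioned, which is consistent with the numerical-stability claim in the summary.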
- Graph Attention Networks with Positional Embeddings [7.552100672006174]
Graph Neural Networks (GNNs) are deep learning methods that provide state-of-the-art performance in node classification tasks.
We propose a framework, termed Graph Attentional Networks with Positional Embeddings (GAT-POS), to enhance GATs with positional embeddings.
We show that GAT-POS achieves remarkable improvements over strong GNN baselines and recent structural-embedding-enhanced GNNs on non-homophilic graphs.
arXiv Detail & Related papers (2021-05-09T22:13:46Z)
- Scalable Graph Neural Networks for Heterogeneous Graphs [12.44278942365518]
Graph neural networks (GNNs) are a popular class of parametric model for learning over graph-structured data.
Recent work has argued that GNNs primarily use the graph for feature smoothing and has shown competitive results on benchmark tasks.
In this work, we ask whether these results can be extended to heterogeneous graphs, which encode multiple types of relationship between different entities.
arXiv Detail & Related papers (2020-11-19T06:03:35Z)