Rewiring with Positional Encodings for Graph Neural Networks
- URL: http://arxiv.org/abs/2201.12674v4
- Date: Wed, 13 Dec 2023 13:18:05 GMT
- Title: Rewiring with Positional Encodings for Graph Neural Networks
- Authors: Rickard Brüel-Gabrielsson, Mikhail Yurochkin, Justin Solomon
- Abstract summary: Several recent works use positional encodings to extend receptive fields of graph neural network layers equipped with attention mechanisms.
We use positional encodings to expand receptive fields to $r$-hop neighborhoods.
We obtain improvements on a variety of models and datasets and reach competitive performance using traditional GNNs or graph Transformers.
- Score: 37.394229290996364
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Several recent works use positional encodings to extend the receptive fields
of graph neural network (GNN) layers equipped with attention mechanisms. These
techniques, however, extend receptive fields to the complete graph, at
substantial computational cost and risking a change in the inductive biases of
conventional GNNs, or require complex architecture adjustments. As a
conservative alternative, we use positional encodings to expand receptive
fields to $r$-hop neighborhoods. More specifically, our method augments the
input graph with additional nodes/edges and uses positional encodings as node
and/or edge features. We thus modify graphs before inputting them to a
downstream GNN model, instead of modifying the model itself. This makes our
method model-agnostic, i.e., compatible with any of the existing GNN
architectures. We also provide examples of positional encodings that are
lossless with a one-to-one map between the original and the modified graphs. We
demonstrate that extending receptive fields via positional encodings and a
virtual fully-connected node significantly improves GNN performance and
alleviates over-squashing using small $r$. We obtain improvements on a variety
of models and datasets and reach competitive performance using traditional GNNs
or graph Transformers.
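A minimal sketch of the rewiring described above, using networkx. The concrete positional encoding (shortest-path distance stored as an edge feature) and the virtual-node PE value are illustrative assumptions; the paper considers several encoding schemes.

```python
# Sketch: r-hop rewiring with shortest-path-distance positional encodings
# and a virtual fully-connected node (PE conventions assumed, not the
# paper's exact scheme).
import networkx as nx

def rewire_with_pe(G: nx.Graph, r: int) -> nx.Graph:
    """Connect each node to its full r-hop neighborhood, recording the
    shortest-path distance as edge feature 'pe'."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes(data=True))
    for u in G.nodes:
        # BFS truncated at depth r enumerates the r-hop neighborhood.
        dists = nx.single_source_shortest_path_length(G, u, cutoff=r)
        for v, d in dists.items():
            if u != v and not H.has_edge(u, v):
                H.add_edge(u, v, pe=d)
    # Virtual node linked to every original node, with a reserved PE value.
    H.add_node("virtual")
    for u in G.nodes:
        H.add_edge(u, "virtual", pe=r + 1)
    return H

G = nx.path_graph(6)             # 0-1-2-3-4-5
H = rewire_with_pe(G, r=2)
print(H[0][2]["pe"])             # 2: a newly added 2-hop edge
print(H[0]["virtual"]["pe"])     # 3: edge to the virtual node
```

Because the original edges are exactly those with pe == 1, the original graph is recoverable from the rewired one, consistent with the lossless one-to-one mapping mentioned in the abstract.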
Related papers
- Spatio-Spectral Graph Neural Networks [50.277959544420455]
We propose Spatio-Spectral Graph Neural Networks (S$^2$GNNs).
S$^2$GNNs combine spatially and spectrally parametrized graph filters.
We show that S$^2$GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs.
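A generic sketch of combining a spatial filter with a spectral one on a node signal; the actual S$^2$GNN parametrization differs, and the truncation to k eigenvectors and the response vector g here are assumptions.

```python
# Sketch: sum of a one-hop (spatial) filter and a truncated spectral
# filter built from the graph Laplacian; not the S^2GNN parametrization.
import numpy as np

def spatio_spectral_filter(A, x, g):
    """A: (n, n) adjacency matrix, x: (n,) node signal,
    g: (k,) spectral response over the k lowest frequencies."""
    d = A.sum(axis=1)
    L = np.diag(d) - A                    # combinatorial Laplacian
    _, U = np.linalg.eigh(L)              # eigenvectors, ascending frequency
    Uk = U[:, :len(g)]                    # low-frequency basis
    spectral = Uk @ (g * (Uk.T @ x))      # global, spectrally parametrized
    spatial = A @ x / np.maximum(d, 1)    # local one-hop mean aggregation
    return spatial + spectral
```

The spectral term gives every node a global receptive field in a single layer, which is how such filters can counteract over-squashing.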
arXiv Detail & Related papers (2024-05-29T14:28:08Z)
- Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that weight matrices are learned separately for the nodes in each degree-based group.
This simple-to-implement modification seems to improve performance across datasets and GNN methods.
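A sketch of the stratification idea under assumed degree thresholds: nodes are bucketed by degree, and each bucket gets its own weight matrix within an otherwise standard mean-aggregation layer.

```python
# Sketch: degree-stratified weights in one message-passing layer; the
# bucket boundaries and layer form are illustrative assumptions.
import numpy as np

def stratified_layer(A, X, Ws, thresholds=(2, 8)):
    """A: (n, n) adjacency, X: (n, d) features, Ws: one (d, d') weight
    matrix per degree bucket (here low/mid/high)."""
    deg = A.sum(axis=1)
    bucket = np.digitize(deg, thresholds)          # 0, 1, or 2 per node
    agg = A @ X / np.maximum(deg, 1)[:, None]      # mean over neighbors
    out = np.zeros((X.shape[0], Ws[0].shape[1]))
    for b, W in enumerate(Ws):
        mask = bucket == b
        out[mask] = agg[mask] @ W                  # bucket-specific transform
    return np.maximum(out, 0)                      # ReLU
```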
arXiv Detail & Related papers (2023-12-16T14:09:23Z)
- Recurrent Distance Filtering for Graph Representation Learning [34.761926988427284]
Graph neural networks based on iterative one-hop message passing have been shown to struggle to harness information from distant nodes effectively.
We propose a new architecture to address these challenges.
Our model aggregates other nodes by their shortest distances to the target and uses a linear RNN to encode the sequence of hop representations.
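A minimal sketch of that aggregation scheme: BFS shells around the target node are pooled per hop and fed, nearest shell first, through a linear recurrence (the mean pooling and the plain linear RNN here are assumptions; the paper's recurrence may be parametrized differently).

```python
# Sketch: encode a target node from its hop-distance shells with a
# linear RNN; pooling choice and weights W_h, W_x are assumptions.
import networkx as nx
import numpy as np

def hop_sequence_encoding(G, feats, target, max_hops, W_h, W_x):
    """feats: dict node -> (d,) feature vector; returns a (d,) summary."""
    dist = nx.single_source_shortest_path_length(G, target, cutoff=max_hops)
    h = feats[target].copy()
    for k in range(1, max_hops + 1):
        shell = [feats[v] for v, d in dist.items() if d == k]
        x_k = np.mean(shell, axis=0) if shell else np.zeros_like(h)
        h = W_h @ h + W_x @ x_k       # linear RNN step, no nonlinearity
    return h
```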
arXiv Detail & Related papers (2023-12-03T23:36:16Z)
- T-GAE: Transferable Graph Autoencoder for Network Alignment [79.89704126746204]
T-GAE is a graph autoencoder framework that leverages the transferability and stability of GNNs to achieve efficient network alignment without retraining.
Our experiments demonstrate that T-GAE outperforms the state-of-the-art optimization method and the best GNN approach by up to 38.7% and 50.8%, respectively.
arXiv Detail & Related papers (2023-10-05T02:58:29Z)
- Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
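A sketch of the limit object in code: a graphon W(u, v) from which graphs of any size can be sampled, so a GNN trained on small samples can be evaluated on larger ones (the particular W below is an arbitrary assumption, and the training loop is omitted).

```python
# Sketch: sample a growing sequence of graphs from a fixed graphon.
import numpy as np

def sample_from_graphon(W, n, rng):
    """Latent u_i ~ Uniform[0, 1]; edge (i, j) drawn with prob W(u_i, u_j)."""
    u = rng.uniform(size=n)
    P = W(u[:, None], u[None, :])                    # pairwise edge probabilities
    A = (rng.uniform(size=(n, n)) < P).astype(float)
    A = np.triu(A, 1)                                # no self-loops
    return A + A.T                                   # symmetric adjacency

rng = np.random.default_rng(0)
W = lambda x, y: 0.8 * np.exp(-3.0 * np.abs(x - y))  # an example graphon
graphs = [sample_from_graphon(W, n, rng) for n in (32, 64, 128)]
```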
arXiv Detail & Related papers (2022-10-27T16:00:45Z)
- Graph Neural Networks with Learnable Structural and Positional Representations [83.24058411666483]
A major issue with arbitrary graphs is the absence of canonical positional information of nodes.
We introduce positional encodings (PE) of nodes and inject them into the input layer, as in Transformers.
We observe a performance increase for molecular datasets, from 2.87% up to 64.14% when considering learnable PE for both GNN classes.
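One common instantiation of this injection is to concatenate Laplacian-eigenvector positional encodings to the node features at the input layer, as sketched below; the paper additionally makes the PE learnable, which this fixed-PE sketch omits.

```python
# Sketch: inject fixed Laplacian-eigenvector PEs at the input layer.
import numpy as np

def laplacian_pe(A, k):
    """First k non-trivial eigenvectors of the normalized Laplacian."""
    d = A.sum(axis=1)
    d_is = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(len(A)) - d_is[:, None] * A * d_is[None, :]
    _, U = np.linalg.eigh(L)              # ascending eigenvalues
    return U[:, 1:k + 1]                  # skip the trivial eigenvector

def input_layer(A, X, k=4):
    pe = laplacian_pe(A, k)
    return np.concatenate([X, pe], axis=1)   # node i enters as [x_i ; p_i]
```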
arXiv Detail & Related papers (2021-10-15T05:59:15Z)
- AdaGNN: A multi-modal latent representation meta-learner for GNNs based on AdaBoosting [0.38073142980733]
Graph Neural Networks (GNNs) focus on extracting intrinsic network features.
We propose a boosting-based meta-learner for GNNs.
AdaGNN performs exceptionally well for applications with rich and diverse node neighborhood information.
arXiv Detail & Related papers (2021-08-14T03:07:26Z)
- Graph Attention Networks with Positional Embeddings [7.552100672006174]
Graph Neural Networks (GNNs) are deep learning methods that provide the current state-of-the-art performance in node classification tasks.
We propose a framework, termed Graph Attentional Networks with Positional Embeddings (GAT-POS), to enhance GATs with positional embeddings.
We show that GAT-POS achieves remarkable improvements over strong GNN baselines and recent structural-embedding-enhanced GNNs on non-homophilic graphs.
arXiv Detail & Related papers (2021-05-09T22:13:46Z)
- A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
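The core of the connection can be written out in a few lines (notation assumed: node features X, graph Laplacian L, smoothness weight c; the paper's exact statement may differ in its details):

```latex
% Regularized graph signal denoising over node representations F:
\min_{F}\; \|F - X\|_F^2 \;+\; c\,\operatorname{tr}\!\bigl(F^\top L F\bigr)
% The gradient is 2(F - X) + 2cLF, so one gradient step from F = X
% with step size 1/2 gives
F \;\leftarrow\; X - cLX \;=\; (I - cL)\,X
% and for c = 1 with the normalized Laplacian L = I - \tilde{A}:
F \;=\; \tilde{A}X
% i.e., the one-hop neighborhood aggregation of GCN-style models.
```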
arXiv Detail & Related papers (2020-10-05T04:57:18Z)
- Self-Enhanced GNN: Improving Graph Neural Networks Using Model Outputs [20.197085398581397]
Graph neural networks (GNNs) have received much attention recently because of their excellent performance on graph-based tasks.
We propose self-enhanced GNN (SEG), which improves the quality of the input data using the outputs of existing GNN models.
SEG consistently improves the performance of well-known GNN models such as GCN, GAT and SGC across different datasets.
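A sketch of the data-enhancement loop in self-training style: confident model predictions become pseudo-labels that enlarge the training set before retraining (the confidence threshold is an assumption, and SEG's companion topology-update step is omitted here).

```python
# Sketch: enlarge the training set with confident GNN predictions.
import numpy as np

def enlarge_training_set(probs, train_mask, tau=0.95):
    """probs: (n, C) softmax outputs of a trained GNN; train_mask: (n,)
    bool. Returns the augmented mask and pseudo-labels (use the latter
    only where train_mask was False)."""
    conf = probs.max(axis=1)
    pseudo = probs.argmax(axis=1)
    new_mask = train_mask | (conf >= tau)
    return new_mask, pseudo
```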
arXiv Detail & Related papers (2020-02-18T12:27:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.