Graph Positional Encoding via Random Feature Propagation
- URL: http://arxiv.org/abs/2303.02918v3
- Date: Wed, 19 Jul 2023 05:51:00 GMT
- Title: Graph Positional Encoding via Random Feature Propagation
- Authors: Moshe Eliasof, Fabrizio Frasca, Beatrice Bevilacqua, Eran Treister,
Gal Chechik, Haggai Maron
- Abstract summary: Two main families of node feature augmentation schemes have been explored for enhancing GNNs: random features and spectral positional encoding (PE).
We propose a novel family of positional encoding schemes, Random Feature Propagation (RFP), that draws a link between these two approaches.
We empirically demonstrate that RFP significantly outperforms both spectral PE and random features on multiple node classification and graph classification benchmarks.
- Score: 39.84324765957645
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Two main families of node feature augmentation schemes have been explored for
enhancing GNNs: random features and spectral positional encoding. Surprisingly,
however, there is still no clear understanding of the relation between these
two augmentation schemes. Here we propose a novel family of positional encoding
schemes which draws a link between the above two approaches and improves over
both. The new approach, named Random Feature Propagation (RFP), is inspired by
the power iteration method and its generalizations. It concatenates several
intermediate steps of an iterative algorithm for computing the dominant
eigenvectors of a propagation matrix, starting from random node features.
Notably, these propagation steps are based on graph-dependent propagation
operators that can be either predefined or learned. We explore the theoretical
and empirical benefits of RFP. First, we provide theoretical justifications for
using random features, for incorporating early propagation steps, and for using
multiple random initializations. Then, we empirically demonstrate that RFP
significantly outperforms both spectral PE and random features in multiple node
classification and graph classification benchmarks.
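As a concrete illustration of the construction described above, below is a minimal NumPy sketch of the predefined-operator variant: starting from random node features, it applies a normalized propagation operator several times and concatenates the intermediate orthogonal-iteration states as positional encodings. This is a reconstruction from the abstract alone, not the authors' reference implementation; the function name random_feature_propagation, the choice of D^{-1/2} A D^{-1/2} as the propagation operator, and all parameter names are illustrative assumptions.

```python
import numpy as np

def random_feature_propagation(adj, num_features=4, num_steps=3, seed=0):
    """Minimal sketch of RFP-style positional encodings.

    adj: (n, n) dense adjacency matrix of an undirected graph,
    with num_features <= n so that reduced QR keeps the feature width.
    Returns an (n, num_features * (num_steps + 1)) array that
    concatenates the random initialization with every propagation step.
    """
    n = adj.shape[0]
    assert num_features <= n, "reduced QR needs num_features <= n"
    rng = np.random.default_rng(seed)

    # Predefined propagation operator: symmetrically normalized adjacency
    # D^{-1/2} A D^{-1/2} (the paper also allows learned operators).
    deg = adj.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    P = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]

    # Random node features; the independent columns play the role of
    # the multiple random initializations motivated in the paper.
    X = rng.standard_normal((n, num_features))
    steps = [X]

    for _ in range(num_steps):
        # One step of orthogonal (block power) iteration: propagate, then
        # re-orthonormalize so columns converge toward dominant eigenvectors.
        X, _ = np.linalg.qr(P @ X)
        steps.append(X)

    # Keeping the early, non-converged steps (not only the limit
    # eigenvectors) is what distinguishes RFP from plain spectral PE.
    return np.concatenate(steps, axis=1)
```

For instance, on a 10-node graph with num_features=4 and num_steps=3 the output has shape (10, 16) and could be concatenated to the input node features of any GNN.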
Related papers
- Heating Up Quasi-Monte Carlo Graph Random Features: A Diffusion Kernel Perspective [0.0]
We build upon a recently introduced class of quasi-graph random features (q-GRFs).
We find that the Diffusion kernel performs most similarly to the 2-regularized Laplacian.
We explore graph types that benefit from the previously established antithetic termination procedure.
arXiv Detail & Related papers (2024-10-10T21:51:31Z)
- Computing Systemic Risk Measures with Graph Neural Networks [1.6874375111244329]
This paper investigates systemic risk measures for financial networks of explicitly modelled bilateral liabilities.
We study numerical methods for the approximation of systemic risk and optimal random allocations.
arXiv Detail & Related papers (2024-09-30T10:18:13Z)
- General Graph Random Features [42.75616308187867]
We propose a novel random walk-based algorithm for unbiased estimation of arbitrary functions of a weighted adjacency matrix.
Our algorithm enjoys subquadratic time complexity with respect to the number of nodes, overcoming the notoriously prohibitive cubic scaling of exact graph kernel evaluation.
arXiv Detail & Related papers (2023-10-07T15:47:31Z)
- Discrete Graph Auto-Encoder [52.50288418639075]
We introduce a new framework named Discrete Graph Auto-Encoder (DGAE).
We first use a permutation-equivariant auto-encoder to convert graphs into sets of discrete latent node representations.
In the second step, we sort the sets of discrete latent representations and learn their distribution with a specifically designed auto-regressive model.
arXiv Detail & Related papers (2023-06-13T12:40:39Z)
- Graph-adaptive Rectified Linear Unit for Graph Neural Networks [64.92221119723048]
Graph Neural Networks (GNNs) have achieved remarkable success by extending traditional convolution to learning on non-Euclidean data.
We propose the Graph-adaptive Rectified Linear Unit (GReLU), a new parametric activation function that incorporates neighborhood information in a novel and efficient way.
We conduct comprehensive experiments to show that our plug-and-play GReLU method is efficient and effective given different GNN backbones and various downstream tasks.
arXiv Detail & Related papers (2022-02-13T10:54:59Z)
- Top-N: Equivariant set and graph generation without exchangeability [61.24699600833916]
We consider one-shot probabilistic decoders that map a vector-shaped prior to a distribution over sets or graphs.
These functions can be integrated into variational autoencoders (VAE), generative adversarial networks (GAN) or normalizing flows.
Top-N is a deterministic, non-exchangeable set creation mechanism that learns to select the most relevant points from a trainable reference set.
arXiv Detail & Related papers (2021-10-05T14:51:19Z)
- SigGPDE: Scaling Sparse Gaussian Processes on Sequential Data [16.463077353773603]
We develop SigGPDE, a new scalable sparse variational inference framework for Gaussian Processes (GPs) on sequential data.
We show that the gradients of the GP signature kernel are solutions of a hyperbolic partial differential equation (PDE).
This theoretical insight allows us to build an efficient back-propagation algorithm to optimize the ELBO.
arXiv Detail & Related papers (2021-05-10T09:10:17Z)
- Power Normalizations in Fine-grained Image, Few-shot Image and Graph Classification [38.84294567166725]
We study Power Normalizations (PN) in the deep learning setup via a novel PN layer that pools feature maps.
We investigate the role and meaning of MaxExp and Gamma, two popular PN functions.
We show that SPN on the autocorrelation/covariance matrix and the Heat Diffusion Process (HDP) on a graph Laplacian matrix are closely related.
arXiv Detail & Related papers (2020-12-27T17:06:06Z)
- Permutation-equivariant and Proximity-aware Graph Neural Networks with Stochastic Message Passing [88.30867628592112]
Graph neural networks (GNNs) are emerging machine learning models on graphs.
Permutation-equivariance and proximity-awareness are two important properties highly desirable for GNNs.
We show that existing GNNs, mostly based on the message-passing mechanism, cannot simultaneously preserve the two properties.
In order to preserve node proximities, we augment the existing GNNs with stochastic node representations.
arXiv Detail & Related papers (2020-09-05T16:46:56Z) - Neural Enhanced Belief Propagation on Factor Graphs [85.61562052281688]
A graphical model is a structured representation of locally dependent random variables.
We first extend graph neural networks to factor graphs (FG-GNN).
We then propose a new hybrid model that runs an FG-GNN conjointly with belief propagation.
arXiv Detail & Related papers (2020-03-04T11:03:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.