Path Neural Networks: Expressive and Accurate Graph Neural Networks
- URL: http://arxiv.org/abs/2306.05955v1
- Date: Fri, 9 Jun 2023 15:11:49 GMT
- Title: Path Neural Networks: Expressive and Accurate Graph Neural Networks
- Authors: Gaspard Michel, Giannis Nikolentzos, Johannes Lutzeyer, Michalis
Vazirgiannis
- Abstract summary: Graph neural networks (GNNs) have recently become the standard approach for learning with graph-structured data.
In this paper, we propose Path Neural Networks (PathNNs), a model that updates node representations by aggregating paths emanating from nodes.
We prove that two of these variants are strictly more powerful than the 1-WL algorithm, and we experimentally validate our theoretical results.
- Score: 23.824156607376697
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) have recently become the standard approach for
learning with graph-structured data. Prior work has shed light on both their
potential and their limitations. Unfortunately, it was shown that standard GNNs
are limited in their expressive power. These models are no more
powerful than the 1-dimensional Weisfeiler-Leman (1-WL) algorithm in terms of
distinguishing non-isomorphic graphs. In this paper, we propose Path Neural
Networks (PathNNs), a model that updates node representations by aggregating
paths emanating from nodes. We derive three different variants of the PathNN
model that aggregate single shortest paths, all shortest paths and all simple
paths of length up to K. We prove that two of these variants are strictly more
powerful than the 1-WL algorithm, and we experimentally validate our
theoretical results. We find that PathNNs can distinguish pairs of
non-isomorphic graphs that are indistinguishable by 1-WL, while our most
expressive PathNN variant can even distinguish between 3-WL indistinguishable
graphs. The different PathNN variants are also evaluated on graph
classification and graph regression datasets, where in most cases, they
outperform the baseline methods.
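The path-aggregation idea in the abstract can be made concrete with a short sketch: enumerate all simple paths of length up to K emanating from each node, then fold the features along each path into the node's representation. This is a minimal, non-learned illustration under stated assumptions; the actual PathNN variants encode path feature sequences with learned sequence models (e.g. LSTMs), and the helper names and fixed positional weights below are assumptions for illustration, not the paper's implementation.

```python
def simple_paths_up_to_k(adj, start, K):
    """Enumerate all simple paths (no repeated nodes) with between 1 and K
    edges starting at `start`, as tuples of node ids (hypothetical helper)."""
    paths = []
    stack = [(start,)]
    while stack:
        path = stack.pop()
        if len(path) > 1:
            paths.append(path)
        if len(path) - 1 < K:  # path can still be extended by one edge
            for nbr in adj[path[-1]]:
                if nbr not in path:
                    stack.append(path + (nbr,))
    return paths

def pathnn_layer(adj, h, K):
    """One illustrative PathNN-style update: each node aggregates an
    order-sensitive encoding of the scalar features along every simple
    path of length <= K emanating from it. Fixed positional weights
    1/(position+1) stand in for a learned sequence encoder."""
    new_h = {}
    for v in adj:
        acc = h[v]
        for path in simple_paths_up_to_k(adj, v, K):
            # weight each node's feature by its position along the path
            acc += sum(h[u] / (i + 1) for i, u in enumerate(path[1:], start=1))
        new_h[v] = acc
    return new_h
```

On a triangle with unit node features and K=2, every node sees the paths (v, a), (v, b), (v, a, b), (v, b, a), so all nodes receive the same updated value, reflecting the graph's symmetry.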
Related papers
- The Heterophilic Snowflake Hypothesis: Training and Empowering GNNs for Heterophilic Graphs [59.03660013787925]
We introduce the Heterophily Snowflake Hypothesis and provide an effective solution to guide and facilitate research on heterophilic graphs.
Our observations show that our framework acts as a versatile operator for diverse tasks.
It can be integrated into various GNN frameworks, boosting performance in-depth and offering an explainable approach to choosing the optimal network depth.
arXiv Detail & Related papers (2024-06-18T12:16:00Z) - Spatio-Spectral Graph Neural Networks [50.277959544420455]
We propose Spatio-Spectral Graph Neural Networks (S$^2$GNNs), which combine spatially and spectrally parametrized graph filters.
We show that S$^2$GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs.
arXiv Detail & Related papers (2024-05-29T14:28:08Z) - Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that the weight matrices are learned, separately, for the nodes in each group.
This simple-to-implement modification seems to improve performance across datasets and GNN methods.
arXiv Detail & Related papers (2023-12-16T14:09:23Z) - Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z) - A Topological characterisation of Weisfeiler-Leman equivalence classes [0.0]
Graph Neural Networks (GNNs) are learning models aimed at processing graphs and signals on graphs.
In this article, we rely on the theory of covering spaces to fully characterize the classes of graphs that GNNs cannot distinguish.
We show that the number of indistinguishable graphs in our dataset grows super-exponentially with the number of nodes.
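The 1-WL equivalence that recurs throughout this list can be made concrete with a short color-refinement sketch. This is an assumed minimal implementation, not code from any of the papers above: two graphs whose final color multisets differ are certainly non-isomorphic, while equal multisets mean 1-WL cannot tell them apart (the classic example being a 6-cycle versus two disjoint triangles).

```python
def wl_refine(adj, rounds=3):
    """Illustrative 1-WL color refinement: repeatedly relabel each node
    by the pair (own color, sorted multiset of neighbor colors), then
    compress the signatures into fresh integer colors. Returns the
    sorted final color multiset of the graph."""
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # assign a fresh integer color to each distinct signature
        palette = {s: i for i, s in enumerate(sorted(set(signatures.values())))}
        colors = {v: palette[signatures[v]] for v in adj}
    return sorted(colors.values())
```

For example, a star K(1,3) and a path on four nodes have different color multisets after refinement, while the 6-cycle and two disjoint triangles (both 2-regular) receive identical colorings and are therefore 1-WL indistinguishable.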
arXiv Detail & Related papers (2022-06-23T17:28:55Z) - The Exact Class of Graph Functions Generated by Graph Neural Networks [43.25172578943894]
Given a graph function, is there a Graph Neural Network (GNN) whose output is identical to the graph function?
In this paper, we fully answer this question and characterize the class of graph problems that can be represented by GNNs.
We show that this condition can be efficiently verified by checking quadratically many constraints.
arXiv Detail & Related papers (2022-02-17T18:54:27Z) - On the Universality of Graph Neural Networks on Large Random Graphs [22.720758067273195]
We study the approximation power of Graph Neural Networks (GNNs) on latent position random graphs.
In the large graph limit, GNNs are known to converge to certain "continuous" models known as c-GNNs.
We show that c-SGNNs are strictly more powerful than c-GNNs in the continuous limit.
arXiv Detail & Related papers (2021-05-27T12:52:36Z) - Distance Encoding: Design Provably More Powerful Neural Networks for
Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of graph representation learning methods.
arXiv Detail & Related papers (2020-08-31T23:15:40Z) - Expressive Power of Invariant and Equivariant Graph Neural Networks [10.419350129060598]
We show that Folklore Graph Neural Networks (FGNN) are the most expressive architectures proposed so far for a given tensor order.
FGNNs are able to learn how to solve the problem, leading to much better average performances than existing algorithms.
arXiv Detail & Related papers (2020-06-28T16:35:45Z) - XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information (including all generated content) and is not responsible for any consequences.