Exponentially Improving the Complexity of Simulating the
Weisfeiler-Lehman Test with Graph Neural Networks
- URL: http://arxiv.org/abs/2211.03232v1
- Date: Sun, 6 Nov 2022 22:38:49 GMT
- Title: Exponentially Improving the Complexity of Simulating the
Weisfeiler-Lehman Test with Graph Neural Networks
- Authors: Anders Aamand, Justin Y. Chen, Piotr Indyk, Shyam Narayanan, Ronitt
Rubinfeld, Nicholas Schiefer, Sandeep Silwal, Tal Wagner
- Abstract summary: Recent work shows that the expressive power of Graph Neural Networks (GNNs) in distinguishing non-isomorphic graphs is exactly the same as that of the Weisfeiler-Lehman (WL) graph test.
We present an improved simulation of the WL test on GNNs with exponentially lower complexity.
- Score: 22.36443060418036
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent work shows that the expressive power of Graph Neural Networks (GNNs)
in distinguishing non-isomorphic graphs is exactly the same as that of the
Weisfeiler-Lehman (WL) graph test. In particular, these works show that the WL test
can be simulated by GNNs. However, those simulations involve neural networks
for the 'combine' function of size polynomial or even exponential in the number
of graph nodes $n$, as well as feature vectors of length linear in $n$.
We present an improved simulation of the WL test on GNNs with
\emph{exponentially} lower complexity. In particular, the neural network
implementing the combine function in each node has only a polylogarithmic
number of parameters in $n$, and the feature vectors exchanged by the nodes of
the GNN consist of only $O(\log n)$ bits. We also give logarithmic lower bounds
for the feature vector length and the size of the neural networks, showing the
(near)-optimality of our construction.
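For intuition, the test being simulated is classical 1-WL color refinement: every node repeatedly re-colors itself by hashing its own color together with the multiset of its neighbors' colors, and two graphs are declared non-isomorphic if their color histograms ever differ. Below is a minimal, illustrative Python sketch of this procedure (not the paper's GNN construction); folding each refined color into a fixed-width integer is only meant to echo the idea of short, $O(\log n)$-bit node features and ignores hash collisions.

```python
from collections import Counter

def wl_colors(adj, rounds, bits=32):
    # adj: dict mapping each node to a list of its neighbors.
    # Colors are folded into `bits`-bit integers purely to mimic short node
    # features; hash collisions are ignored in this sketch.
    colors = {v: 0 for v in adj}  # uniform initial coloring
    for _ in range(rounds):
        colors = {
            v: hash((colors[v], tuple(sorted(colors[u] for u in adj[v])))) % (1 << bits)
            for v in adj
        }
    return colors

def wl_distinguishes(adj1, adj2):
    # WL certifies non-isomorphism when the color histograms differ.
    rounds = max(len(adj1), len(adj2))  # n rounds always suffice
    return (Counter(wl_colors(adj1, rounds).values())
            != Counter(wl_colors(adj2, rounds).values()))

# Example: a path and a star on 4 nodes are separated after one round,
# because their neighbor-multiset (degree) patterns already differ.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(wl_distinguishes(path, star))  # True
```

In a GNN that simulates this test, the hashing step is what each node's 'combine' network has to implement; the contribution of the paper is that this can be done with only polylogarithmically many parameters per node and $O(\log n)$-bit messages.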
Related papers
- Non-convolutional Graph Neural Networks [46.79328529882998]
We design a simple graph learning module entirely free of convolution operators, coined the random walk with unifying memory (RUM) neural network.
RUM not only achieves competitive performance but is also robust, memory-efficient, scalable, and faster than the simplest convolutional GNNs.
arXiv Detail & Related papers (2024-07-31T21:29:26Z)
- Spatio-Spectral Graph Neural Networks [50.277959544420455]
We propose Spatio-Spectral Graph Neural Networks (S$^2$GNNs).
S$^2$GNNs combine spatially and spectrally parametrized graph filters.
We show that S$^2$GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs.
arXiv Detail & Related papers (2024-05-29T14:28:08Z)
- Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that the weight matrices are learned separately for the nodes in each group.
This simple-to-implement modification seems to improve performance across datasets and GNN methods.
arXiv Detail & Related papers (2023-12-16T14:09:23Z)
- Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339]
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from the Heterogeneous Graph Benchmark (HGB) and the Open Graph Benchmark (OGB).
arXiv Detail & Related papers (2023-05-18T07:27:18Z)
- Higher-order Sparse Convolutions in Graph Neural Networks [17.647346486710514]
We introduce a new higher-order sparse convolution based on the Sobolev norm of graph signals.
The resulting architecture, S-SobGNN, shows competitive performance in all applications compared to several state-of-the-art methods.
arXiv Detail & Related papers (2023-02-21T08:08:18Z)
- From Local to Global: Spectral-Inspired Graph Neural Networks [28.858773653743075]
Graph Neural Networks (GNNs) are powerful deep learning methods for non-Euclidean data.
Message-passing GNNs (MPNNs) aggregate and combine signals in a local graph neighborhood (a generic sketch of this aggregate/combine scheme follows the related-papers list below).
MPNNs can suffer from issues like over-smoothing or over-squashing.
arXiv Detail & Related papers (2022-09-24T17:19:00Z)
- Graph Neural Networks with Local Graph Parameters [1.8600631687568656]
Local graph parameters can be added to any Graph Neural Network (GNN) architecture.
Our results connect GNNs with deep results in finite model theory and finite variable logics.
arXiv Detail & Related papers (2021-06-12T07:43:51Z)
- Breaking the Limits of Message Passing Graph Neural Networks [6.175401630947573]
Message Passing Graph Neural Networks (MPNNs) have linear complexity with respect to the number of nodes when applied to sparse graphs.
In this paper, we show that if the graph convolution supports are designed in spectral-domain by a non-linear custom function of eigenvalues, the MPNN is theoretically more powerful than the 1-WL test.
arXiv Detail & Related papers (2021-06-08T13:26:56Z)
- A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
arXiv Detail & Related papers (2020-10-05T04:57:18Z)
- Graph Neural Networks: Architectures, Stability and Transferability [176.3960927323358]
Graph Neural Networks (GNNs) are information processing architectures for signals supported on graphs.
They are generalizations of convolutional neural networks (CNNs) in which individual layers contain banks of graph convolutional filters.
arXiv Detail & Related papers (2020-08-04T18:57:36Z)
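Several of the related papers above concern message-passing GNNs (MPNNs), which update each node by aggregating its neighbors' feature vectors and combining the result with the node's own state; the 'combine' function whose size the main result bounds plays exactly this role. The sketch below is one generic layer under illustrative choices (sum aggregation, a single linear map plus ReLU as the combine step), not the architecture of any particular paper listed here.

```python
import numpy as np

def mpnn_layer(adj, features, w_self, w_agg):
    # One generic message-passing layer.
    # adj: dict node -> list of neighbors; features: dict node -> np.ndarray.
    # w_self, w_agg: weight matrices of an illustrative combine function.
    out = {}
    for v, neighbors in adj.items():
        # AGGREGATE: sum of neighbor feature vectors (mean/max also common)
        agg = sum((features[u] for u in neighbors), np.zeros_like(features[v]))
        # COMBINE: a single linear layer with a ReLU nonlinearity
        out[v] = np.maximum(0.0, w_self @ features[v] + w_agg @ agg)
    return out

# Stacking k such layers lets information propagate k hops; a hypothetical
# two-layer run on a small path graph:
rng = np.random.default_rng(0)
d = 8
adj = {0: [1], 1: [0, 2], 2: [1]}
feats = {v: rng.normal(size=d) for v in adj}
w_self, w_agg = rng.normal(size=(d, d)), rng.normal(size=(d, d))
for _ in range(2):
    feats = mpnn_layer(adj, feats, w_self, w_agg)
```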