Some Might Say All You Need Is Sum
- URL: http://arxiv.org/abs/2302.11603v2
- Date: Fri, 19 May 2023 10:34:37 GMT
- Title: Some Might Say All You Need Is Sum
- Authors: Eran Rosenbluth, Jan Toenshoff, Martin Grohe
- Abstract summary: The expressivity of Graph Neural Networks (GNNs) is dependent on the aggregation functions they employ.
We prove that basic functions, which can be computed exactly by Mean or Max GNNs, are inapproximable by any Sum GNN.
- Score: 2.226803104060345
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The expressivity of Graph Neural Networks (GNNs) is dependent on the
aggregation functions they employ. Theoretical works have pointed towards Sum
aggregation GNNs subsuming every other GNN, while certain practical works have
observed a clear advantage to using Mean and Max. An examination of the
theoretical guarantee identifies two caveats. First, it is size-restricted,
that is, the power of every specific GNN is limited to graphs of a specific
size. Successfully processing larger graphs may require another GNN, and so
on. Second, it concerns the power to distinguish non-isomorphic graphs, not the
power to approximate general functions on graphs, and the former does not
necessarily imply the latter.
Ideally, a GNN's usability should not be limited to graphs of any specific
size. Therefore, we explore the realm of unrestricted-size
expressivity. We prove that basic functions, which can be computed exactly by
Mean or Max GNNs, are inapproximable by any Sum GNN. We prove that under
certain restrictions, every Mean or Max GNN can be approximated by a Sum GNN,
but even there, a combination of (Sum, [Mean/Max]) is more expressive than Sum
alone. Lastly, we prove further expressivity limitations for GNNs with a broad
class of aggregations.
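To make the role of the aggregation function concrete, the following is a minimal NumPy sketch (not taken from the paper; the graph, features, and function names are illustrative assumptions) of a single message-passing aggregation step with Sum, Mean, and Max on a toy star graph. It shows the intuition behind the unrestricted-size separation above: the Sum output at the hub grows with the number of neighbors, while Mean and Max stay bounded regardless of graph size.

```python
# Minimal illustrative sketch, not the paper's construction.
import numpy as np

def aggregate(features, neighbors, how):
    """One aggregation step over a node's neighborhood."""
    msgs = features[neighbors]              # (num_neighbors, feature_dim)
    if how == "sum":
        return msgs.sum(axis=0)
    if how == "mean":
        return msgs.mean(axis=0)
    if how == "max":
        return msgs.max(axis=0)
    raise ValueError(f"unknown aggregation: {how}")

def star_neighbors(n):
    """Toy star graph: node 0 is adjacent to nodes 1..n-1."""
    return {0: list(range(1, n)), **{i: [0] for i in range(1, n)}}

for n in (4, 40, 400):
    feats = np.ones((n, 1))                 # constant node features
    hub_nbrs = star_neighbors(n)[0]
    print(n,
          aggregate(feats, hub_nbrs, "sum")[0],   # grows with n
          aggregate(feats, hub_nbrs, "mean")[0],  # stays 1.0
          aggregate(feats, hub_nbrs, "max")[0])   # stays 1.0
```

A fixed Sum GNN must push these unbounded sums through finitely many weights and activations, which gives an informal intuition (not the paper's proof) for why size-invariant targets such as the neighborhood mean can be inapproximable by any single Sum GNN over graphs of all sizes.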
Related papers
- Is uniform expressivity too restrictive? Towards efficient expressivity of graph neural networks [0.6445605125467574]
A Graph Neural Network (GNN) can express a query without its parameters depending on the size of the input graphs.
We show that many GNNs can still efficiently express GC2 queries in a way that the number of parameters remains logarithmic in the maximal degree of the input graphs.
arXiv Detail & Related papers (2024-10-02T18:09:12Z) - A Manifold Perspective on the Statistical Generalization of Graph Neural Networks [84.01980526069075]
We take a manifold perspective to establish the statistical generalization theory of GNNs on graphs sampled from a manifold in the spectral domain.
We prove that the generalization bounds of GNNs decrease linearly with the size of the graphs in the logarithmic scale, and increase linearly with the spectral continuity constants of the filter functions.
arXiv Detail & Related papers (2024-06-07T19:25:02Z) - Spatio-Spectral Graph Neural Networks [50.277959544420455]
We propose Spatio-Spectral Graph Neural Networks (S$^2$GNNs).
S$^2$GNNs combine spatially and spectrally parametrized graph filters.
We show that S$^2$GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs.
arXiv Detail & Related papers (2024-05-29T14:28:08Z) - The logic of rational graph neural networks [0.7614628596146602]
We prove that some GC2 queries of depth $3$ cannot be expressed by GNNs with any rational activation function.
This shows that not all non-polynomial activation functions confer maximal expressivity on GNNs.
We also present a rational subfragment of first-order logic (RGC2), and prove that rational GNNs can express RGC2 queries uniformly over all graphs.
arXiv Detail & Related papers (2023-10-19T20:32:25Z) - Distance-Restricted Folklore Weisfeiler-Leman GNNs with Provable Cycle
Counting Power [27.929167547127207]
The ability of graph neural networks (GNNs) to count certain graph substructures, especially cycles, is important.
We propose a novel class of GNNs -- $d$-Distance-Restricted FWL(2) GNNs.
Our model is the most efficient GNN model to date that can count up to 6-cycles.
arXiv Detail & Related papers (2023-09-10T06:13:29Z) - On the Correspondence Between Monotonic Max-Sum GNNs and Datalog [19.288835943223816]
We study data transformations based on graph neural networks (GNNs)
We study the expressivity of monotonic max-sum GNNs, which cover a subclass of GNNs with max and sum aggregation functions.
arXiv Detail & Related papers (2023-05-29T11:13:38Z) - On the approximation capability of GNNs in node
classification/regression tasks [4.141514895639094]
Graph Neural Networks (GNNs) are a broad class of connectionist models for graph processing.
We show that GNNs are universal approximators in probability for node classification/regression tasks.
arXiv Detail & Related papers (2021-06-16T17:46:51Z) - A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of core sub-dataset and sparse sub-network.
arXiv Detail & Related papers (2021-02-12T21:52:43Z) - The Surprising Power of Graph Neural Networks with Random Node
Initialization [54.4101931234922]
Graph neural networks (GNNs) are effective models for representation learning on relational data.
Standard GNNs are limited in their expressive power, as they cannot distinguish graphs beyond the capability of the Weisfeiler-Leman graph isomorphism test.
In this work, we analyze the expressive power of GNNs with random node initialization (RNI).
We prove that these models are universal, a first such result for GNNs not relying on computationally demanding higher-order properties.
arXiv Detail & Related papers (2020-10-02T19:53:05Z) - Distance Encoding: Design Provably More Powerful Neural Networks for
Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot utilize the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of features for graph representation learning.
arXiv Detail & Related papers (2020-08-31T23:15:40Z) - Eigen-GNN: A Graph Structure Preserving Plug-in for GNNs [95.63153473559865]
Graph Neural Networks (GNNs) are emerging machine learning models on graphs.
Most existing GNN models in practice are shallow and essentially feature-centric.
We show empirically and analytically that the existing shallow GNNs cannot preserve graph structures well.
We propose Eigen-GNN, a plug-in module to boost GNNs' ability to preserve graph structures.
arXiv Detail & Related papers (2020-06-08T02:47:38Z)