The Descriptive Complexity of Graph Neural Networks
- URL: http://arxiv.org/abs/2303.04613v5
- Date: Tue, 03 Dec 2024 11:48:24 GMT
- Title: The Descriptive Complexity of Graph Neural Networks
- Authors: Martin Grohe
- Abstract summary: We show that the graph queries computable by a polynomial-size bounded-depth family of graph neural networks (GNNs) are exactly those definable in the guarded fragment GFO+C of first-order logic with counting and with built-in relations.
We also show that queries computable by a single GNN with piecewise linear activations and rational weights are definable in GFO+C without built-in relations.
- Abstract: We analyse the power of graph neural networks (GNNs) in terms of Boolean circuit complexity and descriptive complexity. We prove that the graph queries that can be computed by a polynomial-size bounded-depth family of GNNs are exactly those definable in the guarded fragment GFO+C of first-order logic with counting and with built-in relations. This puts GNNs in the circuit complexity class (non-uniform) $\text{TC}^0$. Remarkably, the GNN families may use arbitrary real weights and a wide class of activation functions that includes the standard ReLU, logistic "sigmoid", and hyperbolic tangent functions. If the GNNs are allowed to use random initialisation and global readout (both standard features of GNNs widely used in practice), they can compute exactly the same queries as bounded depth Boolean circuits with threshold gates, that is, exactly the queries in $\text{TC}^0$. Moreover, we show that queries computable by a single GNN with piecewise linear activations and rational weights are definable in GFO+C without built-in relations. Therefore, they are contained in uniform $\text{TC}^0$.
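To make the flavour of the result concrete, here is a minimal sketch (not from the paper; all function names are illustrative) of a single message-passing GNN layer with rational weights, sum aggregation, and ReLU that decides the query "x has at least two neighbours", a query definable by the guarded counting formula $\exists^{\geq 2} y\, E(x,y)$:
```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def at_least_two_neighbours(adj: np.ndarray) -> np.ndarray:
    """One message-passing layer with sum aggregation, rational weights,
    and ReLU that decides "x has at least 2 neighbours".

    With constant initial feature 1, sum aggregation gives the degree d(v),
    and relu(d - 1) - relu(d - 2) equals 1 iff d >= 2.
    """
    deg = adj @ np.ones(adj.shape[0])       # sum aggregation over neighbours
    return relu(deg - 1.0) - relu(deg - 2.0)

# Path graph 0-1-2: only the middle node has two neighbours.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
print(at_least_two_neighbours(A))           # [0. 1. 0.]
```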
Related papers
- On the Expressive Power of Subgraph Graph Neural Networks for Graphs with Bounded Cycles
This work investigates $k$-hop subgraph GNNs that aggregate information from neighbors with distances up to $k$.
We prove that the $k$-hop subgraph GNNs can approximate any permutation-invariant/equivariant continuous function over graphs without cycles of length greater than $2k+1$ within any error tolerance.
arXiv Detail & Related papers (2025-02-06T01:25:22Z)
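As a rough illustration of the $k$-hop aggregation described in the entry above, the following sketch gathers features from all nodes within distance $k$ via breadth-first search; the function names and the sum-pooling choice are assumptions, not taken from the paper:
```python
import numpy as np
from collections import deque

def k_hop_aggregate(adj: dict[int, list[int]], feats: np.ndarray, k: int) -> np.ndarray:
    """For each node, sum the features of all nodes at distance <= k."""
    out = np.zeros_like(feats)
    for v in adj:
        dist = {v: 0}
        queue = deque([v])
        while queue:                         # breadth-first search to depth k
            u = queue.popleft()
            if dist[u] == k:
                continue
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        out[v] = feats[sorted(dist)].sum(axis=0)   # includes v itself
    return out

# 4-cycle with one-hot features: k=1 pools each node with its 2 neighbours.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
X = np.eye(4)
print(k_hop_aggregate(cycle, X, k=1))
```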
- Spatio-Spectral Graph Neural Networks
We propose Spatio-Spectral Graph Neural Networks (S$^2$GNNs).
S$^2$GNNs combine spatially and spectrally parametrized graph filters.
We show that S$^2$GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs.
arXiv Detail & Related papers (2024-05-29T14:28:08Z)
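A toy sketch of the general idea of combining a spatial (adjacency-polynomial) filter with a spectral filter parametrized in the Laplacian eigenbasis; this is a generic illustration under assumed conventions, not the S$^2$GNN architecture itself:
```python
import numpy as np

def spatio_spectral_filter(adj, x, h, g):
    """Sum of a spatial and a spectral graph filter applied to signal x.

    Spatial part:  sum_k h[k] * A^k x              (local message passing)
    Spectral part: U diag(g(lam)) U^T x, where the graph Laplacian
                   L = U diag(lam) U^T, and g is a learned frequency response.
    """
    spatial = sum(h_k * np.linalg.matrix_power(adj, k) @ x
                  for k, h_k in enumerate(h))
    lap = np.diag(adj.sum(axis=1)) - adj            # combinatorial Laplacian
    lam, U = np.linalg.eigh(lap)
    spectral = U @ (g(lam)[:, None] * (U.T @ x))
    return spatial + spectral

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
x = np.ones((3, 1))
y = spatio_spectral_filter(A, x, h=[0.5, 0.25], g=lambda lam: np.exp(-lam))
```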
- The logic of rational graph neural networks
We prove that some GC2 queries of depth $3$ cannot be expressed by GNNs with any rational activation function.
This shows that not all non-polynomial activation functions confer maximal expressivity on GNNs.
We also present a rational subfragment of first-order logic, RGC2, and prove that rational GNNs can express RGC2 queries uniformly over all graphs.
arXiv Detail & Related papers (2023-10-19T20:32:25Z)
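For readers unfamiliar with GC2 (the two-variable guarded fragment with counting quantifiers), here is an illustrative query of quantifier depth $3$; it is only an example of the syntax, not one of the specific queries the paper proves inexpressible:
```latex
% Illustrative GC2 query of quantifier depth 3 (two variables x, y reused,
% every quantifier guarded by an edge atom):
%   "x has at least two neighbours y such that y has a neighbour x
%    with at least three red neighbours y"
\varphi(x) \;=\; \exists^{\geq 2} y \Bigl( E(x,y) \wedge
  \exists^{\geq 1} x \bigl( E(y,x) \wedge
    \exists^{\geq 3} y \, ( E(x,y) \wedge \mathrm{Red}(y) ) \bigr) \Bigr)
```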
- On the Correspondence Between Monotonic Max-Sum GNNs and Datalog
We study data transformations based on graph neural networks (GNNs).
We study the expressivity of monotonic max-sum GNNs, which form a subclass of GNNs with max and sum aggregation functions.
arXiv Detail & Related papers (2023-05-29T11:13:38Z)
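To give a feel for the correspondence, here is a hedged sketch (illustrative, not the paper's construction) in which a monotone GNN layer with max aggregation emulates one application of the Datalog rule Reach(y) :- Reach(x), Edge(x, y):
```python
import numpy as np

def max_step(adj: np.ndarray, reach: np.ndarray) -> np.ndarray:
    """One monotone max-aggregation step emulating the Datalog rule
    Reach(y) :- Reach(x), Edge(x, y).

    reach is a 0/1 vector of derived facts; a node becomes derivable as
    soon as any in-neighbour is.  Max keeps the operator monotone, so
    iterating it reaches the Datalog fixpoint.
    """
    incoming = adj.T * reach            # mask edge x->y by Reach(x)
    return np.maximum(reach, incoming.max(axis=1))

# Chain 0 -> 1 -> 2 with the single fact Reach(0); iterate to the fixpoint.
A = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
r = np.array([1.0, 0.0, 0.0])
for _ in range(2):
    r = max_step(A, r)
print(r)                                # [1. 1. 1.]
```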
- Logical Message Passing Networks with One-hop Inference on Atomic Formulas
We propose a framework for complex query answering that decouples Knowledge Graph embeddings from neural set operators.
On top of the query graph, we propose the Logical Message Passing Neural Network (LMPNN) that connects the local one-hop inferences on atomic formulas to the global logical reasoning.
Our approach yields a new state-of-the-art neural CQA model.
arXiv Detail & Related papers (2023-01-21T02:34:06Z)
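As background for "one-hop inference on an atomic formula", here is a generic embedding-based sketch in the TransE style (chosen only for familiarity; LMPNN's actual logical message-passing operator differs): given an atomic formula $r(h, y)$, candidate answers $y$ are scored by how close their embedding is to $e_h + e_r$:
```python
import numpy as np

def one_hop_scores(entity_emb: np.ndarray, h: int, r_emb: np.ndarray) -> np.ndarray:
    """Score every entity y as an answer to the atomic formula r(h, y):
    TransE-style, the predicted tail is e_h + e_r and candidates are
    ranked by negative distance to it."""
    target = entity_emb[h] + r_emb
    return -np.linalg.norm(entity_emb - target, axis=1)

rng = np.random.default_rng(0)
E = rng.normal(size=(5, 8))             # 5 entities, 8-dimensional embeddings
r = rng.normal(size=8)                  # one relation embedding
print(one_hop_scores(E, h=2, r_emb=r).argmax())   # id of the best candidate
```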
- Exponentially Improving the Complexity of Simulating the Weisfeiler-Lehman Test with Graph Neural Networks
We show that the expressive power of Graph Neural Networks (GNNs) in distinguishing non-isomorphic graphs is exactly the same as that of the Weisfeiler-Lehman graph test.
We present an improved simulation of the WL test on GNNs with exponentially lower complexity.
arXiv Detail & Related papers (2022-11-06T22:38:49Z)
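For reference, a compact sketch of the classical 1-WL (colour refinement) procedure that the entry above concerns; hashing of multisets is done with sorted Python tuples here for simplicity:
```python
def wl_refinement(adj: dict[int, list[int]], rounds: int) -> dict[int, int]:
    """1-WL colour refinement: iteratively hash each node's colour
    together with the multiset of its neighbours' colours."""
    colour = {v: 0 for v in adj}                    # uniform initial colouring
    for _ in range(rounds):
        signature = {v: (colour[v], tuple(sorted(colour[u] for u in adj[v])))
                     for v in adj}
        # Relabel signatures with small integers (injective renaming)
        ids = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        colour = {v: ids[signature[v]] for v in adj}
    return colour

# Triangle vs. path: the colour histograms differ, so 1-WL separates them.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
print(sorted(wl_refinement(triangle, 2).values()))  # [0, 0, 0]
print(sorted(wl_refinement(path, 2).values()))      # [0, 0, 1]
```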
- Counting Substructures with Higher-Order Graph Neural Networks: Possibility and Impossibility Results
We study trade-offs between the computational cost and the expressive power of Graph Neural Networks (GNNs).
We show that a new model can count subgraphs of size $k$, thereby overcoming a known limitation of low-order GNNs.
In several cases, the proposed algorithm can greatly reduce computational complexity compared to the existing higher-order $k$-GNNs.
arXiv Detail & Related papers (2020-12-06T03:42:54Z)
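As a baseline for the kind of substructure-counting query discussed above, here is the classical closed-form count of triangles (size-$3$ subgraphs) from powers of the adjacency matrix; it illustrates the target quantity, not the paper's higher-order GNN:
```python
import numpy as np

def triangle_count(adj: np.ndarray) -> int:
    """Number of triangles: trace(A^3) counts closed 3-walks, and each
    triangle is counted 6 times (3 starting points x 2 directions)."""
    a3 = np.linalg.matrix_power(adj, 3)
    return int(np.trace(a3)) // 6

# The complete graph K4 contains C(4,3) = 4 triangles.
K4 = np.ones((4, 4)) - np.eye(4)
print(triangle_count(K4))   # 4
```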
- The Surprising Power of Graph Neural Networks with Random Node Initialization
Graph neural networks (GNNs) are effective models for representation learning on relational data.
Standard GNNs are limited in their expressive power, as they cannot distinguish graphs beyond the capability of the Weisfeiler-Leman graph isomorphism test.
In this work, we analyze the expressive power of GNNs with random node initialization (RNI).
We prove that these models are universal, a first such result for GNNs not relying on computationally demanding higher-order properties.
arXiv Detail & Related papers (2020-10-02T19:53:05Z)
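A minimal sketch of random node initialization as typically described: each node's input features are augmented with freshly sampled random values, which breaks symmetry between otherwise indistinguishable nodes. The distribution and the number of extra dimensions are assumptions here:
```python
import numpy as np

def with_random_node_init(feats: np.ndarray, extra_dims: int = 4,
                          rng: np.random.Generator | None = None) -> np.ndarray:
    """Append i.i.d. random features to each node's feature vector.

    Two automorphic nodes receive different random features with
    probability 1, so later GNN layers can tell them apart.
    """
    rng = rng or np.random.default_rng()
    noise = rng.normal(size=(feats.shape[0], extra_dims))
    return np.concatenate([feats, noise], axis=1)

X = np.ones((5, 3))                       # 5 nodes, identical base features
print(with_random_node_init(X).shape)     # (5, 7)
```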
- Graph Neural Networks: Architectures, Stability and Transferability
Graph Neural Networks (GNNs) are information processing architectures for signals supported on graphs.
They are generalizations of convolutional neural networks (CNNs) in which individual layers contain banks of graph convolutional filters.
arXiv Detail & Related papers (2020-08-04T18:57:36Z)
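To unpack "banks of graph convolutional filters": a common formalization (assumed here) is a polynomial in a graph shift operator $S$, $y = \sum_k h_k S^k x$, which directly generalizes time-domain convolution; a filter bank is several such coefficient vectors applied in parallel:
```python
import numpy as np

def graph_filter(S: np.ndarray, x: np.ndarray, h: np.ndarray) -> np.ndarray:
    """Polynomial graph convolutional filter y = sum_k h[k] * S^k x.

    S is a graph shift operator (e.g. adjacency or Laplacian); with S the
    cyclic shift matrix this reduces to ordinary discrete convolution.
    """
    y = np.zeros_like(x)
    s_k_x = x.copy()                # S^0 x
    for h_k in h:
        y = y + h_k * s_k_x
        s_k_x = S @ s_k_x           # advance to S^{k+1} x
    return y

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(graph_filter(A, np.ones((3, 1)), h=np.array([1.0, 0.5, 0.25])))
```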
- Expressive Power of Invariant and Equivariant Graph Neural Networks
We show that Folklore Graph Neural Networks (FGNN) are the most expressive architectures proposed so far for a given tensor order.
Applied to the Quadratic Assignment Problem, FGNNs are able to learn how to solve it, leading to much better average performance than existing algorithms.
arXiv Detail & Related papers (2020-06-28T16:35:45Z)
- Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks
It is known that current graph neural networks (GNNs) are difficult to make deep due to the problem known as over-smoothing.
Multi-scale GNNs are a promising approach for mitigating the over-smoothing problem.
We derive optimization and generalization guarantees for transductive learning algorithms that include multi-scale GNNs.
arXiv Detail & Related papers (2020-06-15T17:06:17Z)
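A rough sketch of the multi-scale idea in the entry above: instead of using only the final layer (which over-smooths as depth grows), representations from several depths are combined, here by simple concatenation (a jumping-knowledge-style choice assumed purely for illustration):
```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def multi_scale_gnn(adj: np.ndarray, x: np.ndarray,
                    weights: list[np.ndarray]) -> np.ndarray:
    """Run one GNN layer per weight matrix and concatenate every
    intermediate representation, so shallow (less smoothed) scales
    survive alongside deep ones."""
    deg = adj.sum(axis=1, keepdims=True)
    norm_adj = adj / np.maximum(deg, 1.0)     # mean aggregation
    scales = [x]
    for w in weights:
        x = relu(norm_adj @ x @ w)            # one message-passing layer
        scales.append(x)
    return np.concatenate(scales, axis=1)     # multi-scale readout

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.ones((3, 2))
W = [np.eye(2), np.eye(2)]
print(multi_scale_gnn(A, X, W).shape)         # (3, 6)
```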