Fine-grained Expressivity of Graph Neural Networks
- URL: http://arxiv.org/abs/2306.03698v2
- Date: Thu, 2 Nov 2023 13:37:35 GMT
- Title: Fine-grained Expressivity of Graph Neural Networks
- Authors: Jan Böker, Ron Levie, Ningyuan Huang, Soledad Villar, Christopher Morris
- Abstract summary: We consider continuous extensions of both $1$-WL and MPNNs to graphons.
We show that the continuous variant of $1$-WL delivers an accurate topological characterization of the expressive power of MPNNs on graphons.
We also evaluate different MPNN architectures based on their ability to preserve graph distances.
- Score: 15.766353435658043
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Numerous recent works have analyzed the expressive power of message-passing
graph neural networks (MPNNs), primarily utilizing combinatorial techniques
such as the $1$-dimensional Weisfeiler-Leman test ($1$-WL) for the graph
isomorphism problem. However, the graph isomorphism objective is inherently
binary, not giving insights into the degree of similarity between two given
graphs. This work resolves this issue by considering continuous extensions of
both $1$-WL and MPNNs to graphons. Concretely, we show that the continuous
variant of $1$-WL delivers an accurate topological characterization of the
expressive power of MPNNs on graphons, revealing which graphs these networks
can distinguish and the level of difficulty in separating them. We identify the
finest topology where MPNNs separate points and prove a universal approximation
theorem. Consequently, we provide a theoretical framework for graph and graphon
similarity combining various topological variants of classical
characterizations of the $1$-WL. In particular, we characterize the expressive
power of MPNNs in terms of the tree distance, which is a graph distance based
on the concept of fractional isomorphisms, and substructure counts via tree
homomorphisms, showing that these concepts have the same expressive power as
the $1$-WL and MPNNs on graphons. Empirically, we validate our theoretical
findings by showing that randomly initialized MPNNs, without training, exhibit
competitive performance compared to their trained counterparts. Moreover, we
evaluate different MPNN architectures based on their ability to preserve graph
distances, highlighting the significance of our continuous $1$-WL test in
understanding MPNNs' expressivity.
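The two discrete objects that the abstract extends to graphons can be made concrete with a short sketch. The following Python snippet is illustrative only and not the paper's code; the function names, the sum-aggregation/mean-readout architecture, and the fixed random seed are assumptions made for this example. It runs classical $1$-WL colour refinement and a randomly initialized MPNN on a 6-cycle versus two disjoint triangles, a pair that $1$-WL (and hence any MPNN) cannot distinguish, matching the paper's observation that untrained MPNN embeddings already reflect $1$-WL-style (in)distinguishability.

```python
import numpy as np

def wl_refinement(adj, iters=3):
    """Classical 1-WL colour refinement on a graph given as a dense adjacency matrix."""
    n = adj.shape[0]
    colours = [0] * n                                   # uniform initial colouring
    for _ in range(iters):
        # new colour = (own colour, sorted multiset of neighbour colours), relabelled to ints
        sigs = [(colours[v], tuple(sorted(colours[u] for u in range(n) if adj[v, u])))
                for v in range(n)]
        relabel = {s: i for i, s in enumerate(sorted(set(sigs)))}
        colours = [relabel[s] for s in sigs]
    return colours

def random_mpnn_embedding(adj, width=16, layers=3):
    """Untrained MPNN: sum aggregation, random linear maps, ReLU, mean read-out."""
    rng = np.random.default_rng(0)                      # fixed seed: same random network for every graph
    n = adj.shape[0]
    h, in_dim = np.ones((n, 1)), 1                      # constant initial node features
    for _ in range(layers):
        w_self = rng.normal(size=(in_dim, width)) / np.sqrt(in_dim)
        w_neigh = rng.normal(size=(in_dim, width)) / np.sqrt(in_dim)
        h, in_dim = np.maximum(h @ w_self + adj @ (h @ w_neigh), 0.0), width
    return h.mean(axis=0)                               # graph-level embedding

def cycle(n):
    a, idx = np.zeros((n, n)), np.arange(n)
    a[idx, (idx + 1) % n] = 1
    a[(idx + 1) % n, idx] = 1
    return a

# A 6-cycle and two disjoint triangles: 1-WL assigns identical colourings, and the
# same untrained MPNN maps both graphs to the same embedding (distance 0).
c6 = cycle(6)
two_triangles = np.block([[cycle(3), np.zeros((3, 3))], [np.zeros((3, 3)), cycle(3)]])
print(wl_refinement(c6), wl_refinement(two_triangles))
print(np.linalg.norm(random_mpnn_embedding(c6) - random_mpnn_embedding(two_triangles)))
```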
Related papers
- Weisfeiler-Leman at the margin: When more expressivity matters [10.192184857243666]
We show that an architecture's expressivity offers limited insights into its generalization performance when viewed through graph isomorphism.
We introduce variations of expressive $1$-WL-based kernel and MPNN architectures with provable generalization properties.
arXiv Detail & Related papers (2024-02-12T11:03:52Z)
- Uplifting the Expressive Power of Graph Neural Networks through Graph Partitioning [3.236774847052122]
We study the expressive power of graph neural networks through the lens of graph partitioning.
We introduce a novel GNN architecture, namely Graph Partitioning Neural Networks (GPNNs).
arXiv Detail & Related papers (2023-12-14T06:08:35Z)
- Generalization Limits of Graph Neural Networks in Identity Effects Learning [12.302336258860116]
Graph Neural Networks (GNNs) have emerged as a powerful tool for data-driven learning on various graph domains.
We establish new generalization properties and fundamental limits of GNNs in the context of learning so-called identity effects.
Our study is motivated by the need to understand the capabilities of GNNs when performing simple cognitive tasks.
arXiv Detail & Related papers (2023-06-30T20:56:38Z)
- Representation Power of Graph Neural Networks: Improved Expressivity via Algebraic Analysis [124.97061497512804]
We show that standard Graph Neural Networks (GNNs) produce more discriminative representations than the Weisfeiler-Lehman (WL) algorithm.
We also show that simple convolutional architectures with white inputs produce equivariant features that count the closed paths in the graph.
arXiv Detail & Related papers (2022-05-19T18:40:25Z)
- Discovering the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs.
arXiv Detail & Related papers (2022-05-15T11:38:14Z)
- On Graph Neural Networks versus Graph-Augmented MLPs [51.23890789522705]
Graph-Augmented Multi-Layer Perceptrons (GA-MLPs) first augment node features with certain multi-hop operators on the graph, as sketched in the short code example after this list.
We prove a separation in expressive power between GA-MLPs and GNNs that grows exponentially in depth.
arXiv Detail & Related papers (2020-10-28T17:59:59Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of features for graph representation learning.
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
- Walk Message Passing Neural Networks and Second-Order Graph Neural Networks [4.355567556995855]
We introduce a new type of MPNN, $\ell$-walk MPNNs, which aggregate features along walks of length $\ell$ between vertices.
We show that $2$-walk MPNNs match 2-WL in expressive power.
In particular, to match W[$\ell$] in expressive power, we allow $\ell-1$ matrix multiplications in each layer.
arXiv Detail & Related papers (2020-06-16T20:24:01Z)
- Improving Graph Neural Network Expressivity via Subgraph Isomorphism Counting [63.04999833264299]
"Graph Substructure Networks" (GSN) is a topologically-aware message passing scheme based on substructure encoding.
We show that it is strictly more expressive than the Weisfeiler-Leman (WL) graph isomorphism test.
We perform an extensive evaluation on graph classification and regression tasks and obtain state-of-the-art results in diverse real-world settings.
arXiv Detail & Related papers (2020-06-16T15:30:31Z)
- How hard is to distinguish graphs with graph neural networks? [32.09819774228997]
This study derives hardness results for the classification variant of graph isomorphism in the message-passing model (MPNN).
MPNN encompasses the majority of graph neural networks used today and is universal when nodes are given unique features.
An empirical study involving 12 graph classification tasks and 420 networks reveals strong alignment between actual performance and theoretical predictions.
arXiv Detail & Related papers (2020-05-13T22:28:46Z)
- Can Graph Neural Networks Count Substructures? [53.256112515435355]
We study the power of graph neural networks (GNNs) via their ability to count attributed graph substructures.
We distinguish between two types of substructure counting: induced-subgraph-count and subgraph-count, and give both positive and negative answers for popular GNN architectures.
arXiv Detail & Related papers (2020-02-10T18:53:30Z)
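For the GA-MLP entry above, the following is a minimal sketch of the multi-hop feature-augmentation idea, under the assumption that the operators are powers of a symmetrically normalized adjacency matrix; the referenced paper treats a more general operator family, and the function names and untrained two-layer MLP are illustrative only.

```python
import numpy as np

def ga_mlp_features(adj, x, num_hops=3):
    """Stack [X, A_hat X, A_hat^2 X, ...] column-wise, with A_hat = D^{-1/2} (A + I) D^{-1/2}."""
    n = adj.shape[0]
    a_hat = adj + np.eye(n)
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_hat = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
    blocks, cur = [x], x
    for _ in range(num_hops):
        cur = a_hat @ cur                          # one further hop of propagated features
        blocks.append(cur)
    return np.concatenate(blocks, axis=1)          # shape (n, (num_hops + 1) * d)

def ga_mlp(adj, x, hidden=32, out=8, num_hops=3):
    """Node-wise two-layer MLP (untrained random weights) applied to the augmented features."""
    rng = np.random.default_rng(0)
    feats = ga_mlp_features(adj, x, num_hops)
    w1 = rng.normal(size=(feats.shape[1], hidden)) / np.sqrt(feats.shape[1])
    w2 = rng.normal(size=(hidden, out)) / np.sqrt(hidden)
    return np.maximum(feats @ w1, 0.0) @ w2        # (n, out) node representations
```

In contrast to an MPNN, all graph structure enters only through the fixed augmentation step, which is what the cited separation result between GA-MLPs and GNNs exploits.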