How does over-squashing affect the power of GNNs?
- URL: http://arxiv.org/abs/2306.03589v3
- Date: Mon, 12 Feb 2024 19:24:06 GMT
- Title: How does over-squashing affect the power of GNNs?
- Authors: Francesco Di Giovanni, T. Konstantin Rusch, Michael M. Bronstein,
Andreea Deac, Marc Lackenby, Siddhartha Mishra, Petar Veličković
- Abstract summary: Graph Neural Networks (GNNs) are the state-of-the-art model for machine learning on graph-structured data.
We provide a rigorous analysis to determine which function classes of node features can be learned by an MPNN of a given capacity.
We prove that, to guarantee sufficient communication between pairs of nodes, the capacity of the MPNN must be large enough.
- Score: 39.52168593457813
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) are the state-of-the-art model for machine
learning on graph-structured data. The most popular class of GNNs operate by
exchanging information between adjacent nodes, and are known as Message Passing
Neural Networks (MPNNs). Given their widespread use, understanding the
expressive power of MPNNs is a key question. However, existing results
typically consider settings with uninformative node features. In this paper, we
provide a rigorous analysis to determine which function classes of node
features can be learned by an MPNN of a given capacity. We do so by measuring
the level of pairwise interactions between nodes that MPNNs allow for. This
measure provides a novel quantitative characterization of the so-called
over-squashing effect, which is observed to occur when a large volume of
messages is aggregated into fixed-size vectors. Using our measure, we prove
that, to guarantee sufficient communication between pairs of nodes, the
capacity of the MPNN must be large enough, depending on properties of the input
graph structure, such as commute times. For many relevant scenarios, our
analysis results in impossibility statements in practice, showing that
over-squashing hinders the expressive power of MPNNs. We validate our
theoretical findings through extensive controlled experiments and ablation
studies.
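To make the commute-time dependence concrete, here is a minimal sketch (Python/NumPy, not code from the paper) that computes pairwise commute times via the standard identity C(u, v) = vol(G) * (L+_uu + L+_vv - 2 L+_uv), where L+ is the Moore-Penrose pseudoinverse of the graph Laplacian; the function name and the toy path graph are illustrative assumptions.
```python
import numpy as np

def commute_times(adj: np.ndarray) -> np.ndarray:
    """Pairwise commute times of a connected, undirected graph.

    adj: symmetric (n, n) adjacency matrix with non-negative weights.
    Entry (u, v) of the result is the expected number of random-walk steps
    to go from u to v and back to u.
    """
    deg = adj.sum(axis=1)
    vol = deg.sum()                      # graph volume, 2|E| if unweighted
    laplacian = np.diag(deg) - adj
    l_pinv = np.linalg.pinv(laplacian)   # Moore-Penrose pseudoinverse L+
    d = np.diag(l_pinv)
    # C(u, v) = vol * (L+_uu + L+_vv - 2 * L+_uv)
    return vol * (d[:, None] + d[None, :] - 2.0 * l_pinv)

# Toy example: the path graph 0-1-2-3. The two endpoints have the largest
# commute time (18 vs. 6 for adjacent nodes), so they are the pair whose
# interaction is hardest for a fixed-capacity MPNN to capture, i.e. the
# pair most affected by over-squashing.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(commute_times(A).round(2))
```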
Related papers
- Sign is Not a Remedy: Multiset-to-Multiset Message Passing for Learning on Heterophilic Graphs [77.42221150848535]
We propose a novel message-passing function called Multiset-to-Multiset GNN (M2M-GNN).
Our theoretical analyses and extensive experiments demonstrate that M2M-GNN effectively alleviates the limitations of signed message passing (SMP) and yields superior performance in comparison.
arXiv Detail & Related papers (2024-05-31T07:39:22Z)
- Probabilistic Graph Rewiring via Virtual Nodes [21.273828055299408]
Message-passing graph neural networks (MPNNs) have emerged as a powerful paradigm for graph-based machine learning.
MPNNs face challenges such as under-reaching and over-squashing, where limited receptive fields and structural bottlenecks hinder information flow in the graph.
Here, we propose implicitly rewired message-passing neural networks (IPR-MPNNs), a novel approach that integrates implicit probabilistic graph rewiring into MPNNs.
arXiv Detail & Related papers (2024-05-27T16:11:49Z)
- Information Flow in Graph Neural Networks: A Clinical Triage Use Case [49.86931948849343]
Graph Neural Networks (GNNs) have gained popularity in healthcare and other domains due to their ability to process multi-modal and multi-relational graphs.
We investigate how the flow of embedding information within GNNs affects the prediction of links in Knowledge Graphs (KGs).
Our results demonstrate that incorporating domain knowledge into the GNN connectivity leads to better performance than using the same connectivity as the KG or allowing unconstrained embedding propagation.
arXiv Detail & Related papers (2023-09-12T09:18:12Z)
- Graph Neural Networks are Inherently Good Generalizers: Insights by Bridging GNNs and MLPs [71.93227401463199]
This paper pinpoints the major source of GNNs' performance gain to their intrinsic generalization capability, by introducing an intermediate model class dubbed P(ropagational)MLP.
We observe that PMLPs consistently perform on par with (or even exceed) their GNN counterparts, while being much more efficient in training.
arXiv Detail & Related papers (2022-12-18T08:17:32Z)
- Boosting the Cycle Counting Power of Graph Neural Networks with I$^2$-GNNs [9.806910643086043]
We propose I$^2$-GNNs to extend Subgraph MPNNs by assigning different identifiers to the root node and its neighbors in each subgraph.
I$^2$-GNNs' discriminative power is shown to be strictly stronger than that of Subgraph MPNNs and partially stronger than the 3-WL test.
To the best of our knowledge, it is the first linear-time GNN model that can count 6-cycles with theoretical guarantees.
arXiv Detail & Related papers (2022-10-22T09:40:00Z)
- Discovering the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest-neighbor (KNN) graphs and fully-connected (FC) graphs (a minimal construction sketch for both appears after this list).
arXiv Detail & Related papers (2022-05-15T11:38:14Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order Weisfeiler-Lehman (WL) tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of features for graph representation learning.
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
- How hard is to distinguish graphs with graph neural networks? [32.09819774228997]
This study derives hardness results for the classification variant of graph isomorphism in the message-passing model (MPNN).
MPNN encompasses the majority of graph neural networks used today and is universal when nodes are given unique features.
An empirical study involving 12 graph classification tasks and 420 networks reveals strong alignment between actual performance and theoretical predictions.
arXiv Detail & Related papers (2020-05-13T22:28:46Z)
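For the K-nearest-neighbor and fully-connected constructions mentioned in the multi-order-interactions entry above, a minimal sketch follows (Python/NumPy, assuming Euclidean distances over node coordinates and a symmetrized KNN graph; the helper names are illustrative, not taken from that paper).
```python
import numpy as np

def knn_graph(x: np.ndarray, k: int) -> np.ndarray:
    """Adjacency matrix of a symmetrized K-nearest-neighbor graph.

    x: (n, d) array of node coordinates or features; each node is linked to
    its k closest other nodes under Euclidean distance, then the edge set is
    symmetrized so the graph is undirected.
    """
    dists = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)          # no self-loops
    nbrs = np.argsort(dists, axis=1)[:, :k]  # indices of k nearest neighbors
    adj = np.zeros((len(x), len(x)))
    adj[np.repeat(np.arange(len(x)), k), nbrs.ravel()] = 1.0
    return np.maximum(adj, adj.T)

def fc_graph(n: int) -> np.ndarray:
    """Adjacency matrix of a fully-connected graph on n nodes (no self-loops)."""
    return np.ones((n, n)) - np.eye(n)

# Example: 10 points in 3-D (e.g. atom coordinates) with k = 3 neighbors.
pts = np.random.default_rng(0).normal(size=(10, 3))
A_knn, A_fc = knn_graph(pts, k=3), fc_graph(len(pts))
```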