Can Graph Neural Networks Count Substructures?
- URL: http://arxiv.org/abs/2002.04025v4
- Date: Wed, 28 Oct 2020 18:03:30 GMT
- Title: Can Graph Neural Networks Count Substructures?
- Authors: Zhengdao Chen, Lei Chen, Soledad Villar, Joan Bruna
- Abstract summary: We study the power of graph neural networks (GNNs) via their ability to count attributed graph substructures.
We distinguish between two types of substructure counting: induced-subgraph-count and subgraph-count, and establish both positive and negative answers for popular GNN architectures.
- Score: 53.256112515435355
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability to detect and count certain substructures in graphs is important
for solving many tasks on graph-structured data, especially in the contexts of
computational chemistry and biology as well as social network analysis.
Inspired by this, we propose to study the expressive power of graph neural
networks (GNNs) via their ability to count attributed graph substructures,
extending recent works that examine their power in graph isomorphism testing
and function approximation. We distinguish between two types of substructure
counting: induced-subgraph-count and subgraph-count, and establish both
positive and negative answers for popular GNN architectures. Specifically, we
prove that Message Passing Neural Networks (MPNNs), 2-Weisfeiler-Lehman (2-WL)
and 2-Invariant Graph Networks (2-IGNs) cannot perform induced-subgraph-count
of substructures consisting of 3 or more nodes, while they can perform
subgraph-count of star-shaped substructures. As an intermediary step, we prove
that 2-WL and 2-IGNs are equivalent in distinguishing non-isomorphic graphs,
partly answering an open problem raised in Maron et al. (2019). We also prove
positive results for k-WL and k-IGNs as well as negative results for k-WL with
a finite number of iterations. We then conduct experiments that support the
theoretical results for MPNNs and 2-IGNs. Moreover, motivated by substructure
counting and inspired by Murphy et al. (2019), we propose the Local Relational
Pooling model and demonstrate that it is not only effective for substructure
counting but also able to achieve competitive performance on molecular
prediction tasks.
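The distinction between induced-subgraph-count and subgraph-count, and the claim that MPNNs can still subgraph-count star-shaped patterns, can be illustrated by brute force on a toy graph. The sketch below (a hypothetical 5-node example, not from the paper) counts triangles and 2-edge paths (P3): an induced P3 requires exactly two of the three possible edges on a node triple, while a (non-induced) P3 subgraph also lives inside every triangle. It then checks the degree-based identity behind the star result: the subgraph-count of k-stars equals the sum over nodes of C(deg, k), a purely local quantity an MPNN can compute.

```python
from itertools import combinations
from math import comb

# Toy undirected graph (an illustrative example, not taken from the paper).
edges = {(1, 2), (1, 3), (2, 3), (2, 4), (3, 4), (4, 5)}
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)
nodes = sorted(adj)

triangles = 0          # subgraph-count of K3 (= induced count, since K3 is complete)
induced_p3 = 0         # induced-subgraph-count of the 2-edge path
subgraph_p3 = 0        # subgraph-count of the 2-edge path

for a, b, c in combinations(nodes, 3):
    # Number of edges present among the triple {a, b, c}.
    k = (b in adj[a]) + (c in adj[a]) + (c in adj[b])
    if k == 3:
        triangles += 1
    if k == 2:
        induced_p3 += 1          # exactly two edges: an induced P3
    # Any two of the three possible edges on a triple share a vertex,
    # so each pair of present edges forms one P3 subgraph: C(k, 2) paths.
    subgraph_p3 += comb(k, 2)

# A 2-star is the same pattern as P3; its subgraph-count is sum_v C(deg(v), 2),
# which depends only on node degrees (hence MPNN-computable).
star2_by_degree = sum(comb(len(adj[v]), 2) for v in nodes)

print(triangles, induced_p3, subgraph_p3, star2_by_degree)
```

On this example the two P3 counts differ (each triangle contributes three non-induced paths but no induced one), while the degree formula reproduces the subgraph-count exactly.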
Related papers
- Uplifting the Expressive Power of Graph Neural Networks through Graph
Partitioning [3.236774847052122]
We study the expressive power of graph neural networks through the lens of graph partitioning.
We introduce a novel GNN architecture, namely Graph Partitioning Neural Networks (GPNNs)
arXiv Detail & Related papers (2023-12-14T06:08:35Z) - Generalization Limits of Graph Neural Networks in Identity Effects
Learning [12.302336258860116]
Graph Neural Networks (GNNs) have emerged as a powerful tool for data-driven learning on various graph domains.
We establish new generalization properties and fundamental limits of GNNs in the context of learning so-called identity effects.
Our study is motivated by the need to understand the capabilities of GNNs when performing simple cognitive tasks.
arXiv Detail & Related papers (2023-06-30T20:56:38Z) - T2-GNN: Graph Neural Networks for Graphs with Incomplete Features and
Structure via Teacher-Student Distillation [65.43245616105052]
Graph Neural Networks (GNNs) have been a prevailing technique for tackling various analysis tasks on graph data.
In this paper, we propose a general GNN framework based on teacher-student distillation to improve the performance of GNNs on incomplete graphs.
arXiv Detail & Related papers (2022-12-24T13:49:44Z) - Representation Power of Graph Neural Networks: Improved Expressivity via
Algebraic Analysis [124.97061497512804]
We show that standard Graph Neural Networks (GNNs) produce more discriminative representations than the Weisfeiler-Lehman (WL) algorithm.
We also show that simple convolutional architectures with white inputs produce equivariant features that count the closed paths in the graph.
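The closed-path counts referred to here are the standard spectral quantities trace(A^k): the number of closed walks of length k in a graph equals the trace of the k-th power of its adjacency matrix, and for k = 3 each triangle contributes exactly 6 such walks. A minimal pure-Python sketch on a hypothetical 4-node graph (a triangle with one pendant edge, not an example from the paper):

```python
def matmul(X, Y):
    """Multiply two square matrices given as lists of lists."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Adjacency matrix of a triangle (nodes 0,1,2) plus a pendant node 3 on node 2.
A = [[0, 1, 1, 0],
     [1, 0, 1, 0],
     [1, 1, 0, 1],
     [0, 0, 1, 0]]

A3 = matmul(matmul(A, A), A)
closed_walks_len3 = sum(A3[i][i] for i in range(len(A)))  # trace(A^3)
triangles = closed_walks_len3 // 6  # each triangle yields 6 closed 3-walks

print(closed_walks_len3, triangles)
```

The division by 6 reflects the 3 starting vertices and 2 traversal directions of each triangle; analogous (though less clean) identities relate trace(A^k) for larger k to longer cycle counts.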
arXiv Detail & Related papers (2022-05-19T18:40:25Z) - Theoretically Improving Graph Neural Networks via Anonymous Walk Graph
Kernels [25.736529232578178]
Graph neural networks (GNNs) have achieved tremendous success in graph mining.
MPGNNs, as the prevailing type of GNNs, have been theoretically shown unable to distinguish, detect or count many graph substructures.
We propose GSKN, a GNN model with a theoretically stronger ability to distinguish graph structures.
arXiv Detail & Related papers (2021-04-07T08:50:34Z) - Counting Substructures with Higher-Order Graph Neural Networks:
Possibility and Impossibility Results [58.277290855841976]
We study tradeoffs of computational cost and expressive power of Graph Neural Networks (GNNs)
We show that a new model can count subgraphs of size $k$, and thereby overcomes a known limitation of low-order GNNs.
In several cases, the proposed algorithm can greatly reduce computational complexity compared to the existing higher-order $k$-GNNs.
arXiv Detail & Related papers (2020-12-06T03:42:54Z) - Distance Encoding: Design Provably More Powerful Neural Networks for
Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of graph representation learning methods.
arXiv Detail & Related papers (2020-08-31T23:15:40Z) - Improving Graph Neural Network Expressivity via Subgraph Isomorphism
Counting [63.04999833264299]
"Graph Substructure Networks" (GSN) is a topologically-aware message passing scheme based on substructure encoding.
We show that it is strictly more expressive than the Weisfeiler-Leman (WL) graph isomorphism test.
We perform an extensive evaluation on graph classification and regression tasks and obtain state-of-the-art results in diverse real-world settings.
arXiv Detail & Related papers (2020-06-16T15:30:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.