Equivariant Subgraph Aggregation Networks
- URL: http://arxiv.org/abs/2110.02910v1
- Date: Wed, 6 Oct 2021 16:45:07 GMT
- Title: Equivariant Subgraph Aggregation Networks
- Authors: Beatrice Bevilacqua, Fabrizio Frasca, Derek Lim, Balasubramaniam
Srinivasan, Chen Cai, Gopinath Balamurugan, Michael M. Bronstein, Haggai
Maron
- Abstract summary: This paper proposes a novel framework called Equivariant Subgraph Aggregation Networks (ESAN) to address the limited expressive power of message-passing neural networks (MPNNs).
While two graphs may not be distinguishable by an MPNN, they often contain distinguishable subgraphs.
We develop novel variants of the 1-dimensional Weisfeiler-Leman (1-WL) test for graph isomorphism, and prove lower bounds on the expressiveness of ESAN.
We provide theoretical results that describe how design choices such as the subgraph selection policy and equivariant neural architecture affect our architecture's expressive power.
- Score: 23.26140936226352
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Message-passing neural networks (MPNNs) are the leading architecture for deep
learning on graph-structured data, in large part due to their simplicity and
scalability. Unfortunately, it was shown that these architectures are limited
in their expressive power. This paper proposes a novel framework called
Equivariant Subgraph Aggregation Networks (ESAN) to address this issue. Our
main observation is that while two graphs may not be distinguishable by an
MPNN, they often contain distinguishable subgraphs. Thus, we propose to
represent each graph as a set of subgraphs derived by some predefined policy,
and to process it using a suitable equivariant architecture. We develop novel
variants of the 1-dimensional Weisfeiler-Leman (1-WL) test for graph
isomorphism, and prove lower bounds on the expressiveness of ESAN in terms of
these new WL variants. We further prove that our approach increases the
expressive power of both MPNNs and more expressive architectures. Moreover, we
provide theoretical results that describe how design choices such as the
subgraph selection policy and equivariant neural architecture affect our
architecture's expressive power. To deal with the increased computational cost,
we propose a subgraph sampling scheme, which can be viewed as a stochastic
version of our framework. A comprehensive set of experiments on real and
synthetic datasets demonstrates that our framework improves the expressive
power and overall performance of popular GNN architectures.
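Below is a minimal runnable sketch of the idea described in the abstract; it is not the authors' implementation. The assumptions: graphs are numpy adjacency matrices, the edge-deletion policy stands in for the "predefined policy", a toy degree-refinement readout stands in for a shared MPNN, and a plain mean over the bag is a simple invariant stand-in for the paper's equivariant set architecture. All function names are hypothetical.
```python
# Minimal sketch of the bag-of-subgraphs idea from the abstract above;
# NOT the authors' code. Graphs are numpy adjacency matrices; the
# edge-deletion policy is one example of a "predefined policy"; the
# encoder is a toy stand-in for a shared MPNN.
import numpy as np

def edge_deleted_subgraphs(adj):
    """Edge-deletion policy: one subgraph per edge, with that edge removed."""
    subgraphs = []
    for i, j in zip(*np.triu_indices_from(adj, k=1)):
        if adj[i, j]:
            sub = adj.copy()
            sub[i, j] = sub[j, i] = 0.0
            subgraphs.append(sub)
    return subgraphs

def sample_subgraphs(subgraphs, k, rng):
    """Stochastic variant: keep only a random bag of k subgraphs."""
    idx = rng.choice(len(subgraphs), size=min(k, len(subgraphs)), replace=False)
    return [subgraphs[i] for i in idx]

def encode_subgraph(adj, steps=3):
    """Toy 1-WL-style encoder: start from degrees, repeatedly add neighbor
    sums, then read out with a node-permutation-invariant sum."""
    h = adj.sum(axis=1, keepdims=True)
    for _ in range(steps):
        h = h + adj @ h
    return h.sum(axis=0)

def esan_like_representation(adj, k=None, seed=0):
    """Represent a graph as a (possibly sampled) set of subgraphs and
    aggregate the per-subgraph encodings with a set function (mean)."""
    bag = edge_deleted_subgraphs(adj)
    if k is not None:
        bag = sample_subgraphs(bag, k, np.random.default_rng(seed))
    return np.mean([encode_subgraph(s) for s in bag], axis=0)

# One 6-cycle vs. two disjoint triangles: both are 2-regular, so the toy
# encoder (like 1-WL) cannot separate the whole graphs, but their
# edge-deleted subgraphs (a 6-path vs. a triangle plus a 3-path) differ.
c6 = np.roll(np.eye(6), 1, axis=1)
c6 = ((c6 + c6.T) > 0).astype(float)
two_c3 = np.zeros((6, 6))
for a, b in [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]:
    two_c3[a, b] = two_c3[b, a] = 1.0
print(encode_subgraph(c6), encode_subgraph(two_c3))                    # equal
print(esan_like_representation(c6), esan_like_representation(two_c3))  # differ
```
Passing k enables the stochastic variant mentioned in the abstract: only a random bag of k subgraphs is processed, trading expressive power for computational cost.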
Related papers
- From Hypergraph Energy Functions to Hypergraph Neural Networks [94.88564151540459]
We present an expressive family of parameterized, hypergraph-regularized energy functions.
We then demonstrate how minimizers of these energies effectively serve as node embeddings.
We draw parallels between the proposed bilevel hypergraph optimization, and existing GNN architectures in common use.
arXiv Detail & Related papers (2023-06-16T04:40:59Z)
- Combining Stochastic Explainers and Subgraph Neural Networks can Increase Expressivity and Interpretability [12.526174412246107]
Subgraph-enhanced graph neural networks (SGNNs) can increase the power of the standard message-passing framework.
We introduce a novel framework that jointly predicts the class of the graph and a set of explanatory sparse subgraphs.
arXiv Detail & Related papers (2023-04-14T14:21:20Z)
- A Complete Expressiveness Hierarchy for Subgraph GNNs via Subgraph Weisfeiler-Lehman Tests [30.558563815430418]
Subgraph GNNs have emerged as an important direction for developing expressive graph neural networks (GNNs).
This paper conducts a systematic study of general node-based subgraph GNNs through the lens of Subgraph Weisfeiler-Lehman Tests (SWL).
We prove that any node-based subgraph GNN falls into one of six SWL equivalence classes, among which $\mathsf{SSWL}$ achieves the maximal expressive power.
arXiv Detail & Related papers (2023-02-14T14:42:54Z)
- Ordered Subgraph Aggregation Networks [19.18478955240166]
Subgraph-enhanced graph neural networks (GNNs) have emerged, provably boosting the expressive power of standard (message-passing) GNNs.
Here, we introduce a theoretical framework and extend the known expressivity results of subgraph-enhanced GNNs.
We show that increasing subgraph size always increases the expressive power and develop a better understanding of their limitations.
arXiv Detail & Related papers (2022-06-22T15:19:34Z)
- Representation Power of Graph Neural Networks: Improved Expressivity via Algebraic Analysis [124.97061497512804]
We show that standard Graph Neural Networks (GNNs) produce more discriminative representations than the Weisfeiler-Lehman (WL) algorithm.
We also show that simple convolutional architectures with white inputs produce equivariant features that count the closed paths in the graph (see the counting sketch after this list).
arXiv Detail & Related papers (2022-05-19T18:40:25Z)
- Structure-Aware Transformer for Graph Representation Learning [7.4124458942877105]
We show that node representations generated by the Transformer with positional encoding do not necessarily capture structural similarity between them.
We propose the Structure-Aware Transformer, a class of simple and flexible graph transformers built upon a new self-attention mechanism.
Our framework can leverage any existing GNN to extract the subgraph representation, and we show that it systematically improves performance relative to the base GNN model.
arXiv Detail & Related papers (2022-02-07T09:53:39Z)
- Frame Averaging for Invariant and Equivariant Network Design [50.87023773850824]
We introduce Frame Averaging (FA), a framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types.
We show that FA-based models have maximal expressive power in a broad setting.
We propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs.
arXiv Detail & Related papers (2021-10-07T11:05:23Z)
- Counting Substructures with Higher-Order Graph Neural Networks: Possibility and Impossibility Results [58.277290855841976]
We study tradeoffs between the computational cost and the expressive power of Graph Neural Networks (GNNs).
We show that a new model can count subgraphs of size $k$, and thereby overcomes a known limitation of low-order GNNs.
In several cases, the proposed algorithm can greatly reduce computational complexity compared to the existing higher-order $k$-GNNs.
arXiv Detail & Related papers (2020-12-06T03:42:54Z)
- Building powerful and equivariant graph neural networks with structural message-passing [74.93169425144755]
We propose a powerful and equivariant message-passing framework based on two ideas.
First, we propagate a one-hot encoding of the nodes, in addition to the features, in order to learn a local context matrix around each node.
Second, we propose methods for the parametrization of the message and update functions that ensure permutation equivariance.
arXiv Detail & Related papers (2020-06-26T17:15:16Z)
- Improving Graph Neural Network Expressivity via Subgraph Isomorphism Counting [63.04999833264299]
"Graph Substructure Networks" (GSN) is a topologically-aware message passing scheme based on substructure encoding.
We show that it is strictly more expressive than the Weisfeiler-Leman (WL) graph isomorphism test.
We perform an extensive evaluation on graph classification and regression tasks and obtain state-of-the-art results in diverse real-world settings.
arXiv Detail & Related papers (2020-06-16T15:30:31Z)
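As referenced in the "Representation Power of Graph Neural Networks" entry above, here is a small numpy illustration of the linear-algebra fact behind closed-path counting; it is not that paper's architecture. The fact: the i-th diagonal entry of A^k counts closed walks of length k at node i, a permutation-equivariant node feature, and for k = 3 half of this value is the number of triangles through node i. The helper name `closed_walk_features` is hypothetical.
```python
# Counting closed walks via powers of the adjacency matrix (illustration
# only, NOT the cited paper's architecture): diag(A^k)[i] = number of
# closed walks of length k starting and ending at node i.
import numpy as np

def closed_walk_features(adj, max_k=4):  # hypothetical helper name
    """Stack diag(A^k) for k = 2..max_k as per-node features."""
    feats, power = [], np.eye(len(adj))
    for k in range(1, max_k + 1):
        power = power @ adj
        if k >= 2:
            feats.append(np.diag(power))
    return np.stack(feats, axis=1)  # shape: (num_nodes, max_k - 1)

# A triangle 0-1-2 with a pendant node 3 attached to node 0.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
F = closed_walk_features(A)
print(F[:, 1] / 2)  # per-node triangle counts: [1. 1. 1. 0.]
```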