Breaking the Expressive Bottlenecks of Graph Neural Networks
- URL: http://arxiv.org/abs/2012.07219v1
- Date: Mon, 14 Dec 2020 02:36:46 GMT
- Title: Breaking the Expressive Bottlenecks of Graph Neural Networks
- Authors: Mingqi Yang, Yanming Shen, Heng Qi, Baocai Yin
- Abstract summary: Recently, the Weisfeiler-Lehman (WL) graph isomorphism test was used to measure the expressiveness of graph neural networks (GNNs).
In this paper, we improve the expressiveness by exploring powerful aggregators.
- Score: 26.000304641965947
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, the Weisfeiler-Lehman (WL) graph isomorphism test was used to
measure the expressiveness of graph neural networks (GNNs), showing that
neighborhood-aggregation GNNs are at most as powerful as the 1-WL test in
distinguishing graph structures. Improvements have also been proposed in
analogy to the $k$-WL test ($k>1$). However, the aggregators in these GNNs are
far from injective as the WL test requires, and they suffer from weak
distinguishing strength, making them expressive bottlenecks. In this paper, we
improve the expressiveness by exploring powerful aggregators. We reformulate
aggregation with the corresponding aggregation coefficient matrix, and then
systematically analyze what this matrix must satisfy in order to build more
powerful, and even injective, aggregators. This analysis can also be viewed as
a strategy for preserving the rank of hidden features, and it implies that
basic aggregators correspond to a special case of low-rank transformations. We
also show the necessity of applying nonlinear units ahead of aggregation, which
differs from most aggregation-based GNNs. Based on our theoretical analysis, we
develop two GNN layers, ExpandingConv and CombConv. Experimental results show
that our models significantly boost performance, especially on large and
densely connected graphs.
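The reformulation admits a compact sketch. Below is a minimal, hypothetical PyTorch layer in the spirit of the paper's analysis, assuming aggregation of the form $H' = C\,\sigma(HW)$ with coefficient matrix $C$ and the nonlinearity applied before aggregation; names and shapes are illustrative, not the authors' ExpandingConv/CombConv reference code.

```python
import torch
import torch.nn as nn

class CoefficientAggregation(nn.Module):
    """Hedged sketch: aggregation written as H' = C @ sigma(H @ W).

    C is the (n, n) aggregation coefficient matrix (e.g., a normalized
    adjacency), and the nonlinearity is applied BEFORE aggregation, as
    the paper argues. Illustrative only, not the authors' layer."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        self.act = nn.ReLU()

    def forward(self, H: torch.Tensor, C: torch.Tensor) -> torch.Tensor:
        # H: (n, in_dim) node features; C: (n, n) coefficients.
        return C @ self.act(self.lin(H))
```

On this reading, basic mean/sum aggregators fix $C$ to a low-rank-prone special case; the paper's route to stronger (even injective) aggregators is choosing $C$ so that the rank of the hidden features is preserved.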
Related papers
- Improving Expressivity of GNNs with Subgraph-specific Factor Embedded Normalization [30.86182962089487]
Graph Neural Networks (GNNs) have emerged as a powerful category of learning architecture for handling graph-structured data.
We propose a dedicated plug-and-play normalization scheme, termed SUbgraph-sPEcific FactoR Embedded Normalization (SuperNorm).
arXiv Detail & Related papers (2023-05-31T14:37:31Z) - Equivariant Polynomials for Graph Neural Networks [38.15983687193912]
Graph Neural Networks (GNNs) are inherently limited in their expressive power.
This paper introduces an alternative expressive-power hierarchy based on the ability of GNNs to compute equivariant polynomials of a certain degree.
These enhanced GNNs demonstrate state-of-the-art results in experiments across multiple graph learning benchmarks.
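As a heavily hedged illustration of the underlying idea (not the paper's basis construction), the snippet below computes a few low-degree node-equivariant polynomials of the adjacency matrix: permuting the nodes permutes the output rows in the same way.

```python
import numpy as np

def equivariant_poly_features(A: np.ndarray, X: np.ndarray) -> np.ndarray:
    """Hedged example of low-degree node-equivariant polynomials in A."""
    feats = [
        X,                            # degree 0 in A
        A @ X,                        # degree 1: one-hop sum
        A @ (A @ X),                  # degree 2: two-hop sum
        np.diag(A @ A)[:, None] * X,  # degree 2: node-degree reweighting
    ]
    return np.concatenate(feats, axis=1)
```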
arXiv Detail & Related papers (2023-02-22T18:53:38Z) - Representation Power of Graph Neural Networks: Improved Expressivity via
Algebraic Analysis [124.97061497512804]
We show that standard Graph Neural Networks (GNNs) produce more discriminative representations than the Weisfeiler-Lehman (WL) algorithm.
We also show that simple convolutional architectures with white inputs produce equivariant features that count the closed paths in the graph.
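For intuition on the closed-path claim, recall the standard algebraic fact that powers of the adjacency matrix count closed walks; the paper's result works through convolutions with white inputs, so the sketch below shows only the underlying counting identity.

```python
import numpy as np

def closed_walk_counts(A: np.ndarray, k: int) -> np.ndarray:
    """Entry i of diag(A^k) counts length-k closed walks through node i.
    For a simple undirected graph and k = 3, dividing by 2 gives the
    per-node triangle count."""
    return np.diagonal(np.linalg.matrix_power(A, k))
```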
arXiv Detail & Related papers (2022-05-19T18:40:25Z) - A Theoretical Comparison of Graph Neural Network Extensions [10.482805367361818]
We study and compare different Graph Neural Network extensions that increase the expressive power of GNNs beyond the Weisfeiler-Leman test.
We show negative examples that are particularly challenging for each of the extensions, and we prove several claims about the ability of these extensions to count cliques and cycles in the graph.
arXiv Detail & Related papers (2022-01-30T17:21:24Z) - Graph Neural Networks with Parallel Neighborhood Aggregations for Graph
Classification [14.112444998191698]
We focus on graph classification using a graph neural network (GNN) model that precomputes the node features using a bank of neighborhood aggregation graph operators arranged in parallel.
These GNN models have a natural advantage of reduced training and inference time due to the precomputations.
We demonstrate via numerical experiments that the developed model achieves state-of-the-art performance on many diverse real-world datasets.
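A minimal sketch of the precomputation idea, assuming the operator bank is successive powers of a normalized adjacency (the paper's bank may differ): aggregate once before training, then train a plain MLP on the stacked features.

```python
import numpy as np

def precompute_bank(A_norm: np.ndarray, X: np.ndarray, hops: int = 3) -> np.ndarray:
    """Hedged sketch: apply a parallel bank of aggregation operators
    (here, powers of A_norm) to X once, offline. No message passing is
    needed at training or inference time afterwards."""
    feats, H = [X], X
    for _ in range(hops):
        H = A_norm @ H            # one more hop of aggregation
        feats.append(H)
    return np.concatenate(feats, axis=1)  # (n, (hops + 1) * d)
```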
arXiv Detail & Related papers (2021-11-22T19:19:40Z) - Meta-Aggregator: Learning to Aggregate for 1-bit Graph Neural Networks [127.32203532517953]
We develop a vanilla 1-bit framework that binarizes both the GNN parameters and the graph features.
Despite the lightweight architecture, we observed that this vanilla framework suffered from insufficient discriminative power in distinguishing graph topologies.
This discovery motivates us to devise meta aggregators to improve the expressive power of vanilla binarized GNNs.
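The vanilla binarization step can be sketched generically as sign quantization with a per-tensor scale; this is the common 1-bit scheme such frameworks build on, not the paper's meta aggregators, which are learned on top of it.

```python
import torch

def binarize(t: torch.Tensor) -> torch.Tensor:
    """Hedged sketch of 1-bit quantization for weights or features:
    codes in {-1, 0, +1} from sign(), rescaled by the mean magnitude."""
    scale = t.abs().mean()        # scalar scale preserves magnitude
    return scale * torch.sign(t)
```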
arXiv Detail & Related papers (2021-09-27T08:50:37Z) - A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph-structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
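The unified view can be stated concretely: aggregation is read as (approximately) solving the denoising problem $\min_F \|F - X\|_F^2 + c\,\mathrm{tr}(F^\top L F)$ over node features $F$, with $L$ the graph Laplacian. A minimal sketch of the exact solution follows; one gradient step from $F = X$ instead recovers a GCN-style smoothing.

```python
import numpy as np

def graph_denoise(X: np.ndarray, L: np.ndarray, c: float = 1.0) -> np.ndarray:
    """Closed form of min_F ||F - X||_F^2 + c * tr(F^T L F):
    setting the gradient 2(F - X) + 2cLF to zero gives
    F* = (I + cL)^{-1} X."""
    n = L.shape[0]
    return np.linalg.solve(np.eye(n) + c * L, X)
```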
arXiv Detail & Related papers (2020-10-05T04:57:18Z) - The Surprising Power of Graph Neural Networks with Random Node
Initialization [54.4101931234922]
Graph neural networks (GNNs) are effective models for representation learning on relational data.
Standard GNNs are limited in their expressive power, as they cannot distinguish beyond the capability of the Weisfeiler-Leman graph isomorphism test.
In this work, we analyze the expressive power of GNNs with random node initialization (RNI).
We prove that these models are universal, a first such result for GNNs not relying on computationally demanding higher-order properties.
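RNI itself is a one-line mechanism; a hedged sketch, with the feature dimension and distribution as assumptions:

```python
import torch

def add_random_node_init(X: torch.Tensor, r_dim: int = 8) -> torch.Tensor:
    """Hedged sketch of RNI: concatenate freshly sampled random features
    to each node at every forward pass, breaking the symmetry that caps
    standard message passing at 1-WL distinguishing power."""
    R = torch.rand(X.size(0), r_dim)   # fresh randomness each call
    return torch.cat([X, R], dim=1)
```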
arXiv Detail & Related papers (2020-10-02T19:53:05Z) - Improving Graph Neural Network Expressivity via Subgraph Isomorphism
Counting [63.04999833264299]
"Graph Substructure Networks" (GSN) is a topologically-aware message passing scheme based on substructure encoding.
We show that it is strictly more expressive than the Weisfeiler-Leman (WL) graph isomorphism test.
We perform an extensive evaluation on graph classification and regression tasks and obtain state-of-the-art results in diverse real-world settings.
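A hedged sketch of substructure encoding with the simplest possible substructure: appending per-node triangle counts to the inputs. GSN uses counts over a chosen family of substructures and a message passing scheme that consumes them; this shows only the counting step.

```python
import numpy as np

def triangle_count_features(A: np.ndarray, X: np.ndarray) -> np.ndarray:
    """Append per-node triangle counts (diag(A^3) / 2 for a simple
    undirected graph) as one extra structural feature column."""
    tri = np.diagonal(np.linalg.matrix_power(A, 3)) / 2.0
    return np.concatenate([X, tri[:, None]], axis=1)
```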
arXiv Detail & Related papers (2020-06-16T15:30:31Z) - Eigen-GNN: A Graph Structure Preserving Plug-in for GNNs [95.63153473559865]
Graph Neural Networks (GNNs) are emerging machine learning models on graphs.
Most existing GNN models in practice are shallow and essentially feature-centric.
We show empirically and analytically that the existing shallow GNNs cannot preserve graph structures well.
We propose Eigen-GNN, a plug-in module to boost GNNs' ability to preserve graph structures.
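A hedged sketch of the plug-in idea, assuming the structural embedding is built from the top eigenvectors of the adjacency matrix (the paper also considers scaling and other variants):

```python
import numpy as np

def eigen_plugin(A: np.ndarray, X: np.ndarray, d: int = 8) -> np.ndarray:
    """Concatenate a d-dimensional spectral embedding (eigenvectors of
    the symmetric adjacency with largest |eigenvalue|) to the node
    features before they enter a GNN."""
    vals, vecs = np.linalg.eigh(A)                 # A assumed symmetric
    top = vecs[:, np.argsort(-np.abs(vals))[:d]]   # top-d by magnitude
    return np.concatenate([X, top], axis=1)
```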
arXiv Detail & Related papers (2020-06-08T02:47:38Z)