From Stars to Subgraphs: Uplifting Any GNN with Local Structure Awareness
- URL: http://arxiv.org/abs/2110.03753v1
- Date: Thu, 7 Oct 2021 19:08:08 GMT
- Title: From Stars to Subgraphs: Uplifting Any GNN with Local Structure Awareness
- Authors: Lingxiao Zhao, Wei Jin, Leman Akoglu, Neil Shah
- Abstract summary: We introduce a general framework to uplift any MPNN to be more expressive.
Our framework is strictly more powerful than 1&2-WL, and is not less powerful than 3-WL.
Our method sets new state-of-the-art performance by large margins for several well-known graph ML tasks.
- Score: 23.279464786779787
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Message Passing Neural Networks (MPNNs) are a common type of Graph Neural
Network (GNN), in which each node's representation is computed recursively by
aggregating representations (messages) from its immediate neighbors akin to a
star-shaped pattern. MPNNs are appealing for being efficient and scalable;
however, their expressiveness is upper-bounded by the 1st-order
Weisfeiler-Lehman isomorphism test (1-WL). In response, prior works propose
highly expressive models at the cost of scalability and sometimes
generalization performance. Our work stands between these two regimes: we
introduce a general framework to uplift any MPNN to be more expressive, with
limited scalability overhead and greatly improved practical performance. We
achieve this by extending local aggregation in MPNNs from star patterns to
general subgraph patterns (e.g., k-egonets): in our framework, each node
representation is computed as the encoding of a surrounding induced subgraph
rather than the encoding of immediate neighbors only (i.e., a star). We choose the
subgraph encoder to be a GNN (mainly MPNNs, considering scalability) to design
a general framework that serves as a wrapper to uplift any GNN. We call our
proposed method GNN-AK (GNN As Kernel), as the framework resembles a
convolutional neural network by replacing the kernel with GNNs. Theoretically,
we show that our framework is strictly more powerful than 1&2-WL, and is not
less powerful than 3-WL. We also design subgraph sampling strategies which
greatly reduce memory footprint and improve speed while maintaining
performance. Our method sets new state-of-the-art performance by large margins
for several well-known graph ML tasks; specifically, 0.08 MAE on ZINC, 74.79%
and 86.887% accuracy on CIFAR10 and PATTERN, respectively.
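The sketch below illustrates the subgraph-based aggregation idea from the abstract in PyTorch Geometric: each node's representation is obtained by encoding its k-hop ego-network with a small base MPNN and pooling, rather than by a single star-shaped aggregation. The class name EgoNetWrapper, the GCN layers, and the mean-pooling readout are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch of the GNN-AK idea (hypothetical code, not the
# authors' implementation): every node is represented by encoding its
# k-hop ego-network with a small base MPNN instead of a single
# star-shaped aggregation step.
import torch
from torch import nn
from torch_geometric.nn import GCNConv
from torch_geometric.utils import k_hop_subgraph


class EgoNetWrapper(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int, k: int = 2):
        super().__init__()
        self.k = k                      # ego-net radius (k-egonet)
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, hid_dim)

    def encode_subgraph(self, x, edge_index):
        # Base MPNN applied inside one induced subgraph, then pooled
        # into a single vector that becomes the center node's embedding.
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index)
        return h.mean(dim=0)

    def forward(self, x, edge_index):
        out = []
        for v in range(x.size(0)):
            # Induced k-hop ego-network around node v, nodes relabeled
            # to local indices so the base MPNN can run on it directly.
            nodes, sub_edge_index, _, _ = k_hop_subgraph(
                v, self.k, edge_index,
                relabel_nodes=True, num_nodes=x.size(0))
            out.append(self.encode_subgraph(x[nodes], sub_edge_index))
        return torch.stack(out)         # one embedding per node


# Toy usage on a 4-node path graph with 3-dimensional node features.
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])
x = torch.randn(4, 3)
model = EgoNetWrapper(in_dim=3, hid_dim=8, k=2)
print(model(x, edge_index).shape)       # torch.Size([4, 8])
```

In the paper's full framework this subgraph encoding replaces the star aggregation at every layer, and the proposed subgraph sampling strategies reduce the memory and runtime cost; the per-node loop above is only the naive, unsampled version.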
Related papers
- MAG-GNN: Reinforcement Learning Boosted Graph Neural Network [68.60884768323739]
A particular line of work proposed subgraph GNNs that use subgraph information to improve GNNs' expressivity and achieved great success.
This effectiveness, however, sacrifices the efficiency of GNNs, since such models enumerate all possible subgraphs.
We propose Magnetic Graph Neural Network (MAG-GNN), a reinforcement learning (RL) boosted GNN, to solve the problem.
arXiv Detail & Related papers (2023-10-29T20:32:21Z) - Graph Ladling: Shockingly Simple Parallel GNN Training without
Intermediate Communication [100.51884192970499]
GNNs are a powerful family of neural networks for learning over graphs.
Scaling GNNs either by deepening or widening suffers from prevalent issues of unhealthy gradients, over-smoothing, and information squashing.
We propose not to deepen or widen current GNNs, but instead present a data-centric perspective of model soups tailored for GNNs.
arXiv Detail & Related papers (2023-06-18T03:33:46Z) - Union Subgraph Neural Networks [7.922920885565194]
We empower Graph Neural Networks (GNNs) by injecting neighbor-connectivity information extracted from a new type of substructure.
By infusing the encoded neighbor connectivities, we propose a novel model, namely Union Subgraph Neural Network (UnionSNN)
Experiments on 18 benchmarks of both graph-level and node-level tasks demonstrate that UnionSNN outperforms state-of-the-art baseline models.
arXiv Detail & Related papers (2023-05-25T05:52:43Z) - VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using
Vector Quantization [70.8567058758375]
VQ-GNN is a universal framework to scale up any convolution-based GNNs using Vector Quantization (VQ) without compromising the performance.
Our framework avoids the "neighbor explosion" problem of GNNs using quantized representations combined with a low-rank version of the graph convolution matrix.
arXiv Detail & Related papers (2021-10-27T11:48:50Z) - UniGNN: a Unified Framework for Graph and Hypergraph Neural Networks [8.777765815864367]
Hypergraph, an expressive structure with flexibility to model the higher-order correlations among entities, has recently attracted increasing attention from various research domains.
We propose UniGNN, a unified framework for interpreting the message passing process in graph and hypergraph neural networks.
Experiments have been conducted to demonstrate the effectiveness of UniGNN on multiple real-world datasets.
arXiv Detail & Related papers (2021-05-03T15:48:34Z) - Identity-aware Graph Neural Networks [63.6952975763946]
We develop a class of message passing Graph Neural Networks (ID-GNNs) with greater expressive power than the 1-WL test.
ID-GNN extends existing GNN architectures by inductively considering nodes' identities during message passing.
We show that transforming existing GNNs to ID-GNNs yields on average 40% accuracy improvement on challenging node, edge, and graph property prediction tasks.
arXiv Detail & Related papers (2021-01-25T18:59:01Z) - The Surprising Power of Graph Neural Networks with Random Node
Initialization [54.4101931234922]
Graph neural networks (GNNs) are effective models for representation learning on relational data.
Standard GNNs are limited in their expressive power, as they cannot distinguish beyond the capability of the Weisfeiler-Leman graph isomorphism test.
In this work, we analyze the expressive power of GNNs with random node initialization (RNI).
We prove that these models are universal, a first such result for GNNs not relying on computationally demanding higher-order properties.
arXiv Detail & Related papers (2020-10-02T19:53:05Z) - Distance Encoding: Design Provably More Powerful Neural Networks for
Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of techniques for graph representation learning.
arXiv Detail & Related papers (2020-08-31T23:15:40Z) - Expressive Power of Invariant and Equivariant Graph Neural Networks [10.419350129060598]
We show that Folklore Graph Neural Networks (FGNN) are the most expressive architectures proposed so far for a given tensor order.
FGNNs are able to learn how to solve the problem, leading to much better average performances than existing algorithms.
arXiv Detail & Related papers (2020-06-28T16:35:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.