Theoretically Improving Graph Neural Networks via Anonymous Walk Graph
Kernels
- URL: http://arxiv.org/abs/2104.02995v1
- Date: Wed, 7 Apr 2021 08:50:34 GMT
- Title: Theoretically Improving Graph Neural Networks via Anonymous Walk Graph
Kernels
- Authors: Qingqing Long, Yilun Jin, Yi Wu, Guojie Song
- Abstract summary: Graph neural networks (GNNs) have achieved tremendous success in graph mining.
MPGNNs, as the prevailing type of GNNs, have been theoretically shown unable to distinguish, detect or count many graph substructures.
We propose GSKN, a GNN model with a theoretically stronger ability to distinguish graph structures.
- Score: 25.736529232578178
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) have achieved tremendous success in graph
mining. However, the inability of GNNs to model substructures in graphs remains
a significant drawback. Specifically, message-passing GNNs (MPGNNs), as the
prevailing type of GNNs, have been theoretically shown unable to distinguish,
detect or count many graph substructures. While efforts have been made to
remedy this inability, existing works either rely on pre-defined
substructure sets, and are thus less flexible, or lack theoretical
insights. In this paper, we propose GSKN, a GNN model with a theoretically
stronger ability to distinguish graph structures. Specifically, we design GSKN
based on anonymous walks (AWs), flexible substructure units, and derive it upon
feature mappings of graph kernels (GKs). We theoretically show that GSKN
provably extends the 1-WL test, and hence the maximally powerful MPGNNs from
both graph-level and node-level viewpoints. Correspondingly, various
experiments are leveraged to evaluate GSKN, where GSKN outperforms a wide range
of baselines, endorsing the analysis.
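To make the abstract's two key ingredients concrete, below is a minimal Python sketch of anonymous walks (AWs) and of 1-WL colour refinement. This is illustrative background only, not the GSKN implementation; all function names are ours, and GSKN derives exact kernel feature maps over AWs rather than the sampled distribution used here.

```python
# Illustrative sketch: anonymous walks and 1-WL colour refinement.
import random
from collections import Counter

import networkx as nx


def anonymous_walk(walk):
    """Relabel a walk by the order in which nodes first appear.

    E.g. (a, b, a, c) and (x, y, x, z) both map to (0, 1, 0, 2), so AWs
    capture walk structure while staying agnostic to node identities.
    """
    first_seen = {}
    for v in walk:
        first_seen.setdefault(v, len(first_seen))
    return tuple(first_seen[v] for v in walk)


def aw_distribution(G, start, length, n_samples=1000, seed=0):
    """Empirical AW distribution from `start` via sampled random walks."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_samples):
        walk, v = [start], start
        for _ in range(length - 1):
            v = rng.choice(list(G.neighbors(v)))
            walk.append(v)
        counts[anonymous_walk(walk)] += 1
    total = sum(counts.values())
    return {aw: c / total for aw, c in counts.items()}


def wl_colors(G, rounds=3):
    """Plain 1-WL refinement: rehash each node's colour together with the
    multiset of its neighbours' colours, for a few rounds."""
    colors = {v: 0 for v in G}
    for _ in range(rounds):
        colors = {
            v: hash((colors[v], tuple(sorted(colors[u] for u in G.neighbors(v)))))
            for v in G
        }
    return colors


if __name__ == "__main__":
    G = nx.cycle_graph(6)
    print(aw_distribution(G, start=0, length=4))
    print(Counter(wl_colors(G).values()))
```

Two walks with different node identities but the same revisit pattern map to the same AW, which is why AWs can serve as flexible, node-agnostic substructure units.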
Related papers
- A Manifold Perspective on the Statistical Generalization of Graph Neural Networks [84.01980526069075]
We take a manifold perspective to establish the statistical generalization theory of GNNs on graphs sampled from a manifold in the spectral domain.
We prove that the generalization bounds of GNNs decrease linearly with the size of the graphs in the logarithmic scale, and increase linearly with the spectral continuity constants of the filter functions.
arXiv Detail & Related papers (2024-06-07T19:25:02Z)
- Graph Neural Networks with Local Graph Parameters [1.8600631687568656]
Local graph parameters can be added to any Graph Neural Network (GNN) architecture.
Our results connect GNNs with deep results in finite model theory and finite variable logics.
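As a rough illustration of the idea, assuming per-node triangle counts as the local parameter (the paper covers a broader family of parameters), one could augment node features like this:

```python
# Hedged sketch: precompute a local graph parameter (triangle counts) and
# append it to the input features of an otherwise unchanged GNN. The
# feature choice is an illustrative assumption, not the paper's recipe.
import networkx as nx
import numpy as np

def augment_with_local_parameters(G, X):
    """Concatenate per-node triangle counts onto feature matrix X."""
    tri = nx.triangles(G)                      # {node: #triangles through it}
    tri_col = np.array([[tri[v]] for v in G.nodes()], dtype=float)
    return np.hstack([X, tri_col])

G = nx.karate_club_graph()
X = np.eye(G.number_of_nodes())                # placeholder one-hot features
X_aug = augment_with_local_parameters(G, X)
print(X_aug.shape)                             # (34, 35): one extra column
```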
arXiv Detail & Related papers (2021-06-12T07:43:51Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order WL tests, are inefficient as they cannot leverage the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of features for graph representation learning.
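A minimal sketch of the distance-encoding recipe as summarized above, using shortest-path distances to a target node set; the exact distance measures and encodings of the DE paper are not reproduced here:

```python
# Illustrative sketch: one-hot encode each node's (capped) shortest-path
# distance to a target node set and use that as extra node features.
import networkx as nx
import numpy as np

def distance_encoding(G, target_set, max_dist=4):
    """One-hot encode min shortest-path distance from each node to the set."""
    dists = {v: max_dist for v in G}           # cap far/unreachable nodes
    for t in target_set:
        for v, d in nx.single_source_shortest_path_length(G, t).items():
            dists[v] = min(dists[v], min(d, max_dist))
    enc = np.zeros((G.number_of_nodes(), max_dist + 1))
    for i, v in enumerate(G.nodes()):
        enc[i, dists[v]] = 1.0
    return enc

G = nx.path_graph(6)
print(distance_encoding(G, target_set={0}))    # rows one-hot by distance to node 0
```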
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
- Graph Neural Networks: Architectures, Stability and Transferability [176.3960927323358]
Graph Neural Networks (GNNs) are information processing architectures for signals supported on graphs.
They are generalizations of convolutional neural networks (CNNs) in which individual layers contain banks of graph convolutional filters.
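To illustrate what a "bank of graph convolutional filters" means here, the toy sketch below applies polynomial filters y = Σ_k h_k S^k x in a graph shift operator S; the coefficients are arbitrary stand-ins for weights a GNN would learn:

```python
# Toy graph-signal-processing filter bank: each filter is a polynomial
# in a shift operator S (here the adjacency matrix).
import networkx as nx
import numpy as np

def graph_filter(S, x, coeffs):
    """Apply the polynomial filter sum_k coeffs[k] * S^k to signal x."""
    y, Sk_x = np.zeros_like(x), x.copy()
    for h in coeffs:
        y += h * Sk_x
        Sk_x = S @ Sk_x                        # next power of S applied to x
    return y

G = nx.path_graph(5)
S = nx.to_numpy_array(G)                       # adjacency as shift operator
x = np.array([1.0, 0.0, 0.0, 0.0, 0.0])        # impulse at node 0
bank = [[1.0, 0.5, 0.25], [0.0, 1.0, -1.0]]    # two filters = a small bank
for coeffs in bank:
    print(graph_filter(S, x, coeffs))
```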
arXiv Detail & Related papers (2020-08-04T18:57:36Z)
- Expressive Power of Invariant and Equivariant Graph Neural Networks [10.419350129060598]
We show that Folklore Graph Neural Networks (FGNN) are the most expressive architectures proposed so far for a given tensor order.
FGNNs are able to learn how to solve the problem, leading to much better average performances than existing algorithms.
arXiv Detail & Related papers (2020-06-28T16:35:45Z)
- Improving Graph Neural Network Expressivity via Subgraph Isomorphism Counting [63.04999833264299]
"Graph Substructure Networks" (GSN) is a topologically-aware message passing scheme based on substructure encoding.
We show that it is strictly more expressive than the Weisfeiler-Leman (WL) graph isomorphism test.
We perform an extensive evaluation on graph classification and regression tasks and obtain state-of-the-art results in diverse real-world settings.
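As a hedged illustration of substructure encoding, the sketch below counts, per node, the subgraph-isomorphic copies of a small motif (a triangle, chosen here purely for illustration) using networkx's matcher; GSN injects such counts into message passing:

```python
# Illustrative sketch: per-node participation counts for a small motif.
from collections import Counter

import networkx as nx

def motif_participation_counts(G, motif):
    """Count how many subgraph-isomorphic copies of `motif` touch each node."""
    matcher = nx.isomorphism.GraphMatcher(G, motif)
    counts = Counter()
    seen = set()
    for mapping in matcher.subgraph_isomorphisms_iter():
        nodes = frozenset(mapping)             # de-duplicate automorphic matches
        if nodes not in seen:
            seen.add(nodes)
            counts.update(nodes)
    return counts

G = nx.wheel_graph(6)                          # hub + 5-cycle: many triangles
print(motif_participation_counts(G, nx.complete_graph(3)))
```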
arXiv Detail & Related papers (2020-06-16T15:30:31Z)
- Eigen-GNN: A Graph Structure Preserving Plug-in for GNNs [95.63153473559865]
Graph Neural Networks (GNNs) are emerging machine learning models on graphs.
Most existing GNN models in practice are shallow and essentially feature-centric.
We show empirically and analytically that the existing shallow GNNs cannot preserve graph structures well.
We propose Eigen-GNN, a plug-in module to boost GNNs' ability to preserve graph structures.
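A minimal sketch of the plug-in idea, under the assumption that the top eigenvectors of the adjacency matrix serve as the structure-preserving features (the paper allows other proximity matrices and dimensions):

```python
# Hedged sketch: concatenate top-d adjacency eigenvectors with the
# original node features before feeding them to the GNN. Using the
# adjacency matrix and d=4 are assumptions for illustration.
import networkx as nx
import numpy as np

def eigen_features(G, d=4):
    """Top-d eigenvectors (by |eigenvalue|) of the adjacency matrix."""
    A = nx.to_numpy_array(G)
    vals, vecs = np.linalg.eigh(A)             # symmetric, so eigh is safe
    top = np.argsort(-np.abs(vals))[:d]
    return vecs[:, top]

G = nx.karate_club_graph()
X = np.eye(G.number_of_nodes())
X_plug = np.hstack([X, eigen_features(G)])     # boosted input features
print(X_plug.shape)                            # (34, 38)
```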
arXiv Detail & Related papers (2020-06-08T02:47:38Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
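For background on the "aggregating and combining" step, here is a minimal mean-aggregation message-passing layer with random placeholder weights; XGNN itself explains trained GNNs rather than defining a new layer, so this is illustration only:

```python
# Minimal mean-aggregation message-passing layer (background sketch).
import networkx as nx
import numpy as np

def mp_layer(G, X, W_self, W_neigh):
    """h_v = ReLU(x_v W_self + mean_{u in N(v)} x_u W_neigh)."""
    n = X.shape[0]
    H = np.zeros((n, W_self.shape[1]))
    for v in G.nodes():                        # assumes nodes labelled 0..n-1
        nbrs = list(G.neighbors(v))
        agg = X[nbrs].mean(axis=0) if nbrs else np.zeros(X.shape[1])
        H[v] = np.maximum(0.0, X[v] @ W_self + agg @ W_neigh)
    return H

rng = np.random.default_rng(0)
G = nx.karate_club_graph()                     # nodes are 0..33
X = np.eye(G.number_of_nodes())
W1, W2 = rng.normal(size=(34, 8)), rng.normal(size=(34, 8))
print(mp_layer(G, X, W1, W2).shape)            # (34, 8)
```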
arXiv Detail & Related papers (2020-06-03T23:52:43Z)