Distance Encoding: Design Provably More Powerful Neural Networks for
Graph Representation Learning
- URL: http://arxiv.org/abs/2009.00142v4
- Date: Thu, 29 Oct 2020 17:11:07 GMT
- Title: Distance Encoding: Design Provably More Powerful Neural Networks for
Graph Representation Learning
- Authors: Pan Li, Yanbang Wang, Hongwei Wang, Jure Leskovec
- Abstract summary: Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order WL tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE), a new class of structure-related features for graph representation learning.
- Score: 63.97983530843762
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning representations of sets of nodes in a graph is crucial for
applications ranging from node-role discovery to link prediction and molecule
classification. Graph Neural Networks (GNNs) have achieved great success in
graph representation learning. However, the expressive power of GNNs is limited by
the 1-Weisfeiler-Lehman (WL) test and thus GNNs generate identical
representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order WL tests, only
focus on representing entire graphs and are computationally inefficient as
they cannot utilize the sparsity of the underlying graph. Here we propose and
mathematically analyze a general class of structure-related features, termed
Distance Encoding (DE). DE assists GNNs in representing any set of nodes, while
providing strictly more expressive power than the 1-WL test. DE captures the
distance between the node set whose representation is to be learned and each
node in the graph. To capture the distance, DE can apply various graph-distance
measures such as shortest path distance or generalized PageRank scores. We
propose two ways for GNNs to use DEs: (1) as extra node features, and (2) as
controllers of message aggregation in GNNs. Both approaches can utilize the
sparse structure of the underlying graph, which leads to computational
efficiency and scalability. We also prove that DE can distinguish node sets
embedded in almost all regular graphs where traditional GNNs always fail. We
evaluate DE on three tasks over six real networks: structural role prediction,
link prediction, and triangle prediction. Results show that our models
outperform GNNs without DE by up to 15% in accuracy and AUROC. Furthermore,
our models also significantly outperform other state-of-the-art methods
specifically designed for the above tasks.
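
To make the idea concrete, below is a minimal sketch of how DE features for a target node set S might be computed, assuming an unweighted graph given as an adjacency list. Both the shortest-path-distance encoding and the generalized-PageRank-style soft distance follow the abstract's description, but the function names, the distance truncation, and the gamma weights are illustrative assumptions, not the paper's reference implementation.

```python
# Minimal sketch of Distance Encoding (DE) features for a node set S.
# Assumed (not from the paper's code): adjacency-list input, one-hot
# truncated shortest-path distances, and fixed gamma weights for the
# generalized PageRank variant.
from collections import deque
import numpy as np

def spd_encoding(adj, node_set, max_dist=4):
    """One-hot shortest-path distance from every node to each s in S.

    Returns an (n, |S| * (max_dist + 2)) array; the last slot per s marks
    nodes farther than max_dist or unreachable.
    """
    nodes = sorted(adj)
    idx = {v: i for i, v in enumerate(nodes)}
    blocks = []
    for s in node_set:
        dist = {s: 0}
        q = deque([s])
        while q:  # BFS over the unweighted graph
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        onehot = np.zeros((len(nodes), max_dist + 2))
        for v in nodes:
            onehot[idx[v], min(dist.get(v, max_dist + 1), max_dist + 1)] = 1.0
        blocks.append(onehot)
    return np.concatenate(blocks, axis=1)

def gpr_encoding(adj, node_set, gammas=(0.5, 0.25, 0.125, 0.0625)):
    """Soft distance via generalized PageRank: sum_k gamma_k * (P^k)_{v,s},
    where P is the row-normalized (random-walk) adjacency matrix."""
    nodes = sorted(adj)
    idx = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)
    A = np.zeros((n, n))
    for u in adj:
        for w in adj[u]:
            A[idx[u], idx[w]] = 1.0
    P = A / np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    x = np.zeros((n, len(node_set)))  # column j is the indicator of s_j
    for j, s in enumerate(node_set):
        x[idx[s], j] = 1.0
    out, walk = np.zeros_like(x), x
    for g in gammas:
        walk = P @ walk               # walk[v, j] = (P^k)_{v, s_j}
        out += g * walk
    return out
```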
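A second sketch illustrates usage (1) from the abstract: the DE vectors are concatenated to the raw node features before message passing. The mean-aggregation layer below is a generic simplification assumed for illustration, not the paper's DEGNN architecture, and W_self / W_neigh are hypothetical learned weight matrices.

```python
# Sketch of DE used as extra node features. The layer is a generic
# mean-aggregation update, assumed for illustration only.
import numpy as np

def mean_mp_layer(adj, H, W_self, W_neigh):
    """h_v' = ReLU(h_v W_self + mean_{u in N(v)} h_u W_neigh)."""
    nodes = sorted(adj)
    idx = {v: i for i, v in enumerate(nodes)}
    out = np.zeros((len(nodes), W_self.shape[1]))
    for v in nodes:
        nbrs = [idx[u] for u in adj[v]]
        agg = H[nbrs].mean(axis=0) if nbrs else np.zeros(H.shape[1])
        out[idx[v]] = np.maximum(H[idx[v]] @ W_self + agg @ W_neigh, 0.0)
    return out

# Example wiring, assuming spd_encoding from the previous sketch:
# adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
# X = np.eye(4)                                   # raw node features
# H0 = np.concatenate([X, spd_encoding(adj, [0])], axis=1)
# rng = np.random.default_rng(0)
# d_in, d_out = H0.shape[1], 16
# H1 = mean_mp_layer(adj, H0, rng.normal(size=(d_in, d_out)),
#                    rng.normal(size=(d_in, d_out)))
```

Because BFS and the DE lookups touch only edges that exist, both sketches inherit the sparsity-friendly cost the abstract claims for DE-augmented GNNs.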
Related papers
- Transferability of Graph Neural Networks using Graphon and Sampling Theories [0.0]
Graph neural networks (GNNs) have become powerful tools for processing graph-based information in various domains.
A desirable property of GNNs is transferability, where a trained network can be applied to a different graph without retraining and retain its accuracy.
We contribute to the application of graphons to GNNs by presenting an explicit two-layer graphon neural network (WNN) architecture.
arXiv Detail & Related papers (2023-07-25T02:11:41Z)
- Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339]
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from Heterogeneous Graph Benchmark (HGB) and Open Graph Benchmark (OGB).
arXiv Detail & Related papers (2023-05-18T07:27:18Z)
- Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z)
- Geodesic Graph Neural Network for Efficient Graph Representation Learning [34.047527874184134]
We propose an efficient GNN framework called Geodesic GNN (GDGNN).
It injects conditional relationships between nodes into the model without labeling.
Conditioned on the geodesic representations, GDGNN is able to generate node, link, and graph representations that carry much richer structural information than plain GNNs.
arXiv Detail & Related papers (2022-10-06T02:02:35Z)
- On the approximation capability of GNNs in node classification/regression tasks [4.141514895639094]
Graph Neural Networks (GNNs) are a broad class of connectionist models for graph processing.
We show that GNNs are universal approximators in probability for node classification/regression tasks.
arXiv Detail & Related papers (2021-06-16T17:46:51Z)
- Graph Neural Networks with Local Graph Parameters [1.8600631687568656]
Local graph parameters can be added to any Graph Neural Networks (GNNs) architecture.
Our results connect GNNs with deep results in finite model theory and finite variable logics.
arXiv Detail & Related papers (2021-06-12T07:43:51Z)
- A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of core sub-dataset and sparse sub-network.
arXiv Detail & Related papers (2021-02-12T21:52:43Z)
- Identity-aware Graph Neural Networks [63.6952975763946]
We develop a class of message passing Graph Neural Networks (ID-GNNs) with greater expressive power than the 1-WL test.
ID-GNN extends existing GNN architectures by inductively considering nodes' identities during message passing.
We show that transforming existing GNNs to ID-GNNs yields on average 40% accuracy improvement on challenging node, edge, and graph property prediction tasks.
arXiv Detail & Related papers (2021-01-25T18:59:01Z)
- Expressive Power of Invariant and Equivariant Graph Neural Networks [10.419350129060598]
We show that Folklore Graph Neural Networks (FGNN) are the most expressive architectures proposed so far for a given tensor order.
FGNNs are able to learn how to solve the problem, leading to much better average performances than existing algorithms.
arXiv Detail & Related papers (2020-06-28T16:35:45Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)