Eigen-GNN: A Graph Structure Preserving Plug-in for GNNs
- URL: http://arxiv.org/abs/2006.04330v1
- Date: Mon, 8 Jun 2020 02:47:38 GMT
- Title: Eigen-GNN: A Graph Structure Preserving Plug-in for GNNs
- Authors: Ziwei Zhang, Peng Cui, Jian Pei, Xin Wang, Wenwu Zhu
- Abstract summary: Graph Neural Networks (GNNs) are emerging machine learning models on graphs.
Most existing GNN models in practice are shallow and essentially feature-centric.
We show empirically and analytically that the existing shallow GNNs cannot preserve graph structures well.
We propose Eigen-GNN, a plug-in module to boost GNNs' ability to preserve graph structures.
- Score: 95.63153473559865
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) are emerging machine learning models on graphs.
Although sufficiently deep GNNs are shown theoretically capable of fully
preserving graph structures, most existing GNN models in practice are shallow
and essentially feature-centric. We show empirically and analytically that the
existing shallow GNNs cannot preserve graph structures well. To overcome this
fundamental challenge, we propose Eigen-GNN, a simple yet effective and general
plug-in module to boost GNNs' ability to preserve graph structures.
Specifically, we integrate the eigenspace of graph structures with GNNs by
treating GNNs as a type of dimensionality reduction and expanding the initial
dimensionality reduction bases. Without needing to increase depth, Eigen-GNN
offers greater flexibility in handling both feature-driven and
structure-driven tasks, since the initial bases contain both node features and
graph structures. We present extensive experimental results to demonstrate the
effectiveness of Eigen-GNN for tasks including node classification, link
prediction, and graph isomorphism tests.
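The mechanism is easy to sketch: compute the top-d eigenvectors of the graph's structure matrix and concatenate them with the node features before the first GNN layer, so the initial bases carry both signals. A minimal NumPy/SciPy sketch follows; using the raw adjacency matrix, the eigenvalue scaling, and the dimension d are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import eigsh

def eigen_gnn_features(adj: csr_matrix, X: np.ndarray, d: int = 16) -> np.ndarray:
    """Expand node features with the top-d eigenspace of the graph structure.

    A minimal sketch of the Eigen-GNN idea: the eigenvectors act as extra
    dimensionality-reduction bases that encode graph structure, so the GNN's
    input carries both features and structure. The adjacency matrix and plain
    concatenation are illustrative choices, not the paper's exact recipe.
    """
    # Top-d eigenpairs of the (symmetric) adjacency matrix.
    vals, vecs = eigsh(adj.asfptype(), k=d, which="LA")
    # Optionally scale eigenvectors by their eigenvalues to weight the bases.
    Q = vecs * np.sqrt(np.abs(vals))
    # Expanded input: [node features | structural eigen-bases].
    return np.concatenate([X, Q], axis=1)
```

The expanded matrix then feeds any off-the-shelf GNN unchanged, which is what makes the module a plug-in.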
Related papers
- A Manifold Perspective on the Statistical Generalization of Graph Neural Networks [84.01980526069075]
Graph Neural Networks (GNNs) combine information from adjacent nodes by successive applications of graph convolutions.
We study the generalization gaps of GNNs on both node-level and graph-level tasks.
We show that the generalization gaps decrease with the number of nodes in the training graphs.
arXiv Detail & Related papers (2024-06-07T19:25:02Z)
- GNN-Ensemble: Towards Random Decision Graph Neural Networks [3.7620848582312405]
Graph Neural Networks (GNNs) have enjoyed widespread application to graph-structured data.
GNNs are required to learn latent patterns from a limited amount of training data to perform inferences on a vast amount of test data.
In this paper, we push ensemble learning of GNNs one step forward, improving accuracy and robustness against adversarial attacks.
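As a rough illustration of the ensemble idea (the paper's exact randomization scheme is not reproduced here), predictions from several independently trained GNNs can be combined by averaging their class probabilities:

```python
import numpy as np

def ensemble_predict(models, graph, X):
    """Combine independently trained GNNs by averaging class probabilities.

    Each `model` is assumed to be a trained GNN callable returning per-node
    class probabilities of shape (n_nodes, n_classes); this is a generic
    ensemble sketch, not the paper's random-decision construction.
    """
    probs = np.stack([m(graph, X) for m in models])  # (n_models, n_nodes, n_classes)
    return probs.mean(axis=0).argmax(axis=1)         # vote by averaged probability
```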
arXiv Detail & Related papers (2023-03-20T18:24:01Z)
- Graph Neural Networks can Recover the Hidden Features Solely from the Graph Structure [17.912507269030577]
Graph Neural Networks (GNNs) are popular models for graph learning problems.
We show that GNNs can fully exploit the graph structure by themselves.
In effect, GNNs can use both the hidden and explicit node features for downstream tasks.
arXiv Detail & Related papers (2023-01-26T06:28:41Z)
- Ego-GNNs: Exploiting Ego Structures in Graph Neural Networks [12.97622530614215]
We show that Ego-GNNs are capable of recognizing closed triangles, which is essential given the prominence of transitivity in real-world graphs.
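For context, closed triangles have a simple algebraic signature that plain message passing is known to miss: the i-th diagonal entry of A^3 counts closed length-3 walks at node i. A small NumPy illustration of that signal (not the Ego-GNN model itself):

```python
import numpy as np

def triangles_per_node(A: np.ndarray) -> np.ndarray:
    """Count closed triangles through each node of an undirected simple graph.

    (A^3)_{ii} counts length-3 closed walks from i back to i; each triangle
    through i is traversed in two directions, hence the division by 2. This is
    the structural signal Ego-GNNs aim to recognize, not their algorithm.
    """
    A3 = A @ A @ A
    return np.diag(A3) // 2
```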
arXiv Detail & Related papers (2021-07-22T23:42:23Z)
- Theoretically Improving Graph Neural Networks via Anonymous Walk Graph Kernels [25.736529232578178]
Graph neural networks (GNNs) have achieved tremendous success in graph mining.
Message-passing GNNs (MPGNNs), the prevailing type of GNNs, have been theoretically shown to be unable to distinguish, detect, or count many graph substructures.
We propose GSKN, a GNN model with a theoretically stronger ability to distinguish graph structures.
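For background, an anonymous walk replaces concrete node identities with the order of their first appearance, so walks become comparable purely by structure. A minimal sketch of that anonymization step (the GSKN kernel machinery itself is not reproduced):

```python
def anonymize_walk(walk):
    """Map a node sequence to its anonymous walk pattern.

    Each node is replaced by the index of its first occurrence in the walk,
    e.g. ('b', 'c', 'b', 'a') -> (0, 1, 0, 2). Two walks share a pattern iff
    they revisit nodes in the same order, regardless of node identity.
    """
    first_seen = {}
    pattern = []
    for node in walk:
        if node not in first_seen:
            first_seen[node] = len(first_seen)
        pattern.append(first_seen[node])
    return tuple(pattern)
```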
arXiv Detail & Related papers (2021-04-07T08:50:34Z)
- A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of core sub-dataset and sparse sub-network.
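A simplified stand-in for the pruning step: keep binary masks over the adjacency matrix and the weights, and drop the lowest-magnitude entries of each. UGS learns such masks during training; the one-shot magnitude pruning below is only an illustrative approximation:

```python
import numpy as np

def magnitude_prune(M: np.ndarray, sparsity: float) -> np.ndarray:
    """Return a binary mask keeping the largest-magnitude entries of M.

    `sparsity` is the fraction of entries to zero out; a crude proxy for the
    differentiable masks UGS optimizes jointly over graph and weights.
    """
    k = int(np.ceil((1.0 - sparsity) * M.size))          # entries to keep
    threshold = np.sort(np.abs(M), axis=None)[-k]        # k-th largest magnitude
    return (np.abs(M) >= threshold).astype(M.dtype)

# Jointly sparsify the graph and the model (simplified UGS-style step),
# assuming `adj` and `weights` are dense score matrices:
#   adj_mask = magnitude_prune(adj, sparsity=0.05)       # prune 5% of edges
#   w_mask   = magnitude_prune(weights, sparsity=0.20)   # prune 20% of weights
```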
arXiv Detail & Related papers (2021-02-12T21:52:43Z)
- A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
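The denoising view can be made concrete with the classical objective min_F ||F - X||_F^2 + c * tr(F^T L F): one gradient step blends the input signal with a Laplacian-smoothed version of F, mirroring a GNN aggregation layer. A minimal sketch (generic graph signal denoising, not the ADA-UGNN instantiation):

```python
import numpy as np

def denoise_step(F, X, L, c=1.0, lr=0.1):
    """One gradient-descent step on ||F - X||_F^2 + c * tr(F^T L F).

    For symmetric Laplacian L the gradient is 2(F - X) + 2c * L @ F, so each
    step pulls F toward the noisy signal X while smoothing it over the graph --
    the kind of neighborhood aggregation a GNN layer performs.
    """
    grad = 2.0 * (F - X) + 2.0 * c * (L @ F)
    return F - lr * grad
```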
arXiv Detail & Related papers (2020-10-05T04:57:18Z)
- The Surprising Power of Graph Neural Networks with Random Node Initialization [54.4101931234922]
Graph neural networks (GNNs) are effective models for representation learning on relational data.
Standard GNNs are limited in their expressive power, as they cannot distinguish graphs beyond the capability of the Weisfeiler-Leman graph isomorphism test.
In this work, we analyze the expressive power of GNNs with random node initialization (RNI).
We prove that these models are universal, a first such result for GNNs not relying on computationally demanding higher-order properties.
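RNI itself reduces to concatenating freshly sampled random features to each node, which breaks the symmetries that cap WL-bounded expressiveness. A minimal sketch (the width k is an arbitrary illustrative choice):

```python
import numpy as np

def random_node_init(X: np.ndarray, k: int = 8, rng=None) -> np.ndarray:
    """Append k freshly sampled random features to every node.

    Structurally identical nodes receive (almost surely) distinct inputs, so
    the GNN can tell them apart -- the symmetry-breaking behind RNI's
    universality result.
    """
    rng = rng or np.random.default_rng()
    R = rng.standard_normal((X.shape[0], k))
    return np.concatenate([X, R], axis=1)
```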
arXiv Detail & Related papers (2020-10-02T19:53:05Z)
- GPT-GNN: Generative Pre-Training of Graph Neural Networks [93.35945182085948]
Graph neural networks (GNNs) have been demonstrated to be powerful in modeling graph-structured data.
We present the GPT-GNN framework to initialize GNNs by generative pre-training.
We show that GPT-GNN significantly outperforms state-of-the-art GNN models without pre-training by up to 9.1% across various downstream tasks.
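Generative pre-training of this flavor can be sketched as self-supervised reconstruction: hold out part of the graph, train the GNN encoder to regenerate it, and reuse the weights as the downstream initialization. The masked-edge loss below is a simplified proxy, not GPT-GNN's exact attribute-and-edge generation objective:

```python
import numpy as np

def edge_reconstruction_loss(Z, pos_edges, neg_edges):
    """Binary cross-entropy for regenerating held-out edges from embeddings.

    Z: (n_nodes, dim) node embeddings from the GNN encoder being pre-trained.
    pos_edges / neg_edges: integer arrays of (i, j) pairs that do / do not
    exist in the graph. A simplified stand-in for the edge-generation term.
    """
    def score(edges):
        # Dot product between endpoint embeddings, one score per edge.
        return np.einsum("ij,ij->i", Z[edges[:, 0]], Z[edges[:, 1]])

    pos = -np.log(1.0 / (1.0 + np.exp(-score(pos_edges))) + 1e-9)
    neg = -np.log(1.0 - 1.0 / (1.0 + np.exp(-score(neg_edges))) + 1e-9)
    return np.mean(np.concatenate([pos, neg]))
```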
arXiv Detail & Related papers (2020-06-27T20:12:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.