Understanding the Message Passing in Graph Neural Networks via Power Iteration Clustering
- URL: http://arxiv.org/abs/2006.00144v3
- Date: Mon, 11 Jan 2021 06:13:38 GMT
- Title: Understanding the Message Passing in Graph Neural Networks via Power Iteration Clustering
- Authors: Xue Li and Yuanzhi Cheng
- Abstract summary: We study the mechanism of message passing in graph neural networks (GNNs).
We propose subspace power iteration clustering (SPIC) models that iteratively learn with only one aggregator.
Our findings push the boundaries of the theoretical understanding of neural networks.
- Score: 4.426835206454162
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The mechanism of message passing in graph neural networks (GNNs) remains mysterious. Aside from the analogy to convolutional neural networks, no theoretical origin for GNNs has been proposed. To our surprise, message passing is best understood in terms of power iteration. By fully or partially removing the activation functions and layer weights of GNNs, we propose subspace power iteration clustering (SPIC) models that learn iteratively with only one aggregator. Experiments show that our models extend GNNs and enhance their capability to process networks with random features. Moreover, we demonstrate the design redundancy of some state-of-the-art GNNs and define a lower limit for model evaluation via a random aggregator of message passing. Our findings push the boundaries of the theoretical understanding of neural networks.
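To make the power-iteration reading of message passing concrete, here is a minimal sketch in the spirit of the SPIC models: with activation functions and layer weights removed, a GNN layer reduces to repeated multiplication by one fixed aggregator. The function names, the GCN-style normalization, and the column normalization are illustrative assumptions, not the authors' code.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops:
    A_hat = D^{-1/2} (A + I) D^{-1/2}, the standard GCN
    propagation matrix (one common choice of aggregator)."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def spic_propagate(A_hat, X, k):
    """k rounds of linear message passing, X <- A_hat @ X, with no
    activation functions or layer weights. This is power iteration
    on the columns of X: they are driven toward the dominant
    eigenspace of A_hat."""
    for _ in range(k):
        X = A_hat @ X
        # Column-normalize, as in classical power iteration,
        # so the iterates stay bounded.
        X = X / np.linalg.norm(X, axis=0, keepdims=True)
    return X
```

After enough rounds, the columns of X approximately span the dominant eigenspace of the aggregator, and clustering in that subspace is precisely the idea behind power iteration clustering.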
Related papers
- Reducing Oversmoothing through Informed Weight Initialization in Graph Neural Networks [16.745718346575202]
We propose a new weight initialization scheme (G-Init) that reduces oversmoothing in deep GNNs, facilitating their effective use and leading to strong results in node and graph classification tasks.
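The summary does not spell out the G-Init rule, so the following is a hypothetical sketch of the general idea of informed, degree-aware initialization: compensate for the variance shrinkage caused by neighborhood averaging. The sqrt(mean_degree) rescaling is an assumption, not the paper's formula.

```python
import numpy as np

def degree_aware_init(fan_in, fan_out, mean_degree, rng=None):
    """Hypothetical degree-aware weight initialization.
    Xavier/Glorot initialization keeps activation variance stable
    across dense layers; mean aggregation over ~d neighbors further
    shrinks variance by ~1/d, which contributes to oversmoothing in
    deep stacks. This sketch compensates by rescaling the Xavier
    bound with sqrt(mean_degree) -- an illustrative assumption,
    not the G-Init rule."""
    rng = rng or np.random.default_rng()
    bound = np.sqrt(6.0 / (fan_in + fan_out)) * np.sqrt(mean_degree)
    return rng.uniform(-bound, bound, size=(fan_in, fan_out))
```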
arXiv Detail & Related papers (2024-10-31T11:21:20Z)
- Bundle Neural Networks for message diffusion on graphs [10.018379001231356]
We prove that Bundle Neural Networks (BuNNs) can approximate any feature transformation over the nodes of any family of graphs given injective positional encodings, resulting in universal node-level expressivity.
arXiv Detail & Related papers (2024-05-24T13:28:48Z)
- Continuous Spiking Graph Neural Networks [43.28609498855841]
Continuous graph neural networks (CGNNs) have garnered significant attention for their ability to generalize existing discrete GNNs.
We introduce the high-order structure COS-GNN, which utilizes a second-order ODE for spiking representation and continuous propagation.
We provide a theoretical proof that COS-GNN effectively mitigates exploding and vanishing gradients, enabling it to capture long-range dependencies between nodes.
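As a rough illustration of second-order continuous propagation (not the COS-GNN equations, which involve spiking dynamics), one can integrate wave-like dynamics d²X/dt² = -LX on the graph; the dynamics and the explicit Euler integrator below are assumptions.

```python
import numpy as np

def second_order_propagation(L, X, steps=50, dt=0.05):
    """Integrate a second-order graph ODE, d^2X/dt^2 = -L @ X,
    with explicit Euler steps, where L is the graph Laplacian.
    Second-order (wave-like) dynamics propagate information farther
    before it is smoothed away, one intuition for why they help
    with long-range dependencies."""
    V = np.zeros_like(X)   # velocity dX/dt, initialized at rest
    for _ in range(steps):
        A = -L @ X         # acceleration from Laplacian coupling
        V = V + dt * A
        X = X + dt * V
    return X
```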
arXiv Detail & Related papers (2024-04-02T12:36:40Z)
- Understanding and Improving Deep Graph Neural Networks: A Probabilistic Graphical Model Perspective [22.82625446308785]
In this work, we focus on deep GNNs and propose a novel view for understanding them from a probabilistic graphical model perspective.
Building on this view, we design a more powerful GNN: the coupling graph neural network (CoGNet).
arXiv Detail & Related papers (2023-01-25T12:02:12Z)
- Enhance Information Propagation for Graph Neural Network by Heterogeneous Aggregations [7.3136594018091134]
Graph neural networks are emerging as a continuation of deep learning's success on graph data.
We propose to enhance information propagation among GNN layers by combining heterogeneous aggregations.
We empirically validate the effectiveness of HAG-Net on a number of graph classification benchmarks.
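The combination below is a minimal sketch of heterogeneous aggregation, assuming a concatenation of mean, max, and sum aggregators; HAG-Net's actual choice and combination scheme may differ.

```python
import numpy as np

def heterogeneous_aggregate(A, X):
    """Combine several neighborhood aggregators rather than one.
    A is a binary adjacency matrix and X the node-feature matrix;
    the concatenated [mean | max | sum] readout is an illustrative
    choice, not necessarily HAG-Net's exact design."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1)
    agg_sum = A @ X                       # sum over neighbors
    agg_mean = agg_sum / deg              # degree-normalized mean
    # Elementwise max over each node's neighbor set; isolated
    # nodes fall back to a zero vector.
    agg_max = np.stack([
        X[A[i].astype(bool)].max(axis=0) if A[i].any()
        else np.zeros(X.shape[1])
        for i in range(A.shape[0])
    ])
    return np.concatenate([agg_mean, agg_max, agg_sum], axis=1)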
arXiv Detail & Related papers (2021-02-08T08:57:56Z)
- Overcoming Catastrophic Forgetting in Graph Neural Networks [50.900153089330175]
Catastrophic forgetting refers to a neural network's tendency to "forget" previously learned knowledge upon learning new tasks.
We propose a novel scheme dedicated to overcoming this problem and hence strengthening continual learning in graph neural networks (GNNs).
At the heart of our approach is a generic module, termed topology-aware weight preserving (TWP).
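TWP's exact importance measure is not given in the summary; schemes of this family typically add a quadratic penalty that anchors weights deemed important for old tasks (as in elastic weight consolidation). A hedged sketch:

```python
import numpy as np

def twp_style_penalty(params, old_params, importance, lam=1.0):
    """Quadratic weight-preserving penalty for continual learning:
    lam * sum_i importance_i * (theta_i - theta_old_i)^2.
    In a topology-aware scheme, `importance` would also score how
    much each weight matters to the graph topology learned on
    previous tasks; how TWP computes it is not reproduced here."""
    total = 0.0
    for p, p_old, w in zip(params, old_params, importance):
        total += lam * np.sum(w * (p - p_old) ** 2)
    return total
```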
arXiv Detail & Related papers (2020-12-10T22:30:25Z)
- A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from the unified framework (UGNN), to handle graphs with adaptive smoothness across nodes.
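The denoising view can be sketched directly: GNN aggregation is read as (approximate) gradient descent on a graph signal denoising objective. The step size and iteration count below are illustrative; with one suitably scaled step and L = I - Â, the update reduces to GCN-style propagation ÂX.

```python
import numpy as np

def denoise(X, L, c=1.0, steps=10, lr=0.1):
    """Gradient descent on the graph signal denoising objective
    ||F - X||_F^2 + c * tr(F^T L F), where L is the graph
    Laplacian. The gradient is 2(F - X) + 2c * L @ F; iterating
    it yields features that stay close to X while being smooth
    over the graph."""
    F = X.copy()
    for _ in range(steps):
        grad = 2 * (F - X) + 2 * c * (L @ F)
        F = F - lr * grad
    return F
```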
arXiv Detail & Related papers (2020-10-05T04:57:18Z)
- The Surprising Power of Graph Neural Networks with Random Node Initialization [54.4101931234922]
Graph neural networks (GNNs) are effective models for representation learning on relational data.
Standard GNNs are limited in their expressive power, as they cannot distinguish graphs beyond the capability of the Weisfeiler-Leman graph isomorphism test.
In this work, we analyze the expressive power of GNNs with random node initialization (RNI).
We prove that these models are universal, a first such result for GNNs not relying on computationally demanding higher-order properties.
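RNI itself is simple to sketch: append randomly initialized channels to the node features so that otherwise-indistinguishable nodes become distinguishable. The dimension and distribution below are illustrative assumptions.

```python
import numpy as np

def add_random_node_init(X, k=8, rng=None):
    """Append k randomly initialized feature channels to each node.
    Random node initialization (RNI) breaks the symmetry that
    limits standard message passing to Weisfeiler-Leman
    expressiveness: nodes the 1-WL test cannot tell apart receive
    distinct features. k and the normal distribution are
    illustrative choices."""
    rng = rng or np.random.default_rng()
    R = rng.normal(size=(X.shape[0], k))
    return np.concatenate([X, R], axis=1)
```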
arXiv Detail & Related papers (2020-10-02T19:53:05Z)
- Graph Neural Networks: Architectures, Stability and Transferability [176.3960927323358]
Graph Neural Networks (GNNs) are information processing architectures for signals supported on graphs.
They are generalizations of convolutional neural networks (CNNs) in which individual layers contain banks of graph convolutional filters.
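A graph convolutional filter is a polynomial in a graph shift operator S (adjacency or Laplacian), y = Σ_k h_k S^k x; a layer applies a bank of such filters followed by a pointwise nonlinearity. A minimal sketch:

```python
import numpy as np

def graph_filter(S, x, h):
    """Apply a graph convolutional filter, y = sum_k h[k] * S^k @ x,
    a polynomial in the graph shift operator S. The coefficients h
    are the learnable part; a GNN layer stacks a bank of such
    filters and a pointwise nonlinearity."""
    y = np.zeros_like(x, dtype=float)
    z = x.astype(float)        # z holds S^k @ x, starting at k = 0
    for hk in h:
        y += hk * z
        z = S @ z
    return y
```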
arXiv Detail & Related papers (2020-08-04T18:57:36Z)
- Graph Neural Networks for Motion Planning [108.51253840181677]
We present two techniques, GNNs over dense fixed graphs for low-dimensional problems and sampling-based GNNs for high-dimensional problems.
We examine the ability of a GNN to tackle planning problems such as identifying critical nodes or learning the sampling distribution in Rapidly-exploring Random Trees (RRT).
Experiments with critical sampling, a pendulum, and a six-DoF robot arm show that GNNs improve on traditional analytic methods as well as on learning approaches using fully connected or convolutional neural networks.
arXiv Detail & Related papers (2020-06-11T08:19:06Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
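The summary leaves BGN's exact binarization scheme unspecified; the sketch below uses the common sign-plus-scale binarization (XNOR-Net style) for the layer weights, which is an assumption.

```python
import numpy as np

def binarize(W):
    """Sign binarization with a per-tensor scaling factor, a common
    scheme in binarized networks; whether BGN uses exactly this
    scheme is an assumption."""
    alpha = np.abs(W).mean()   # per-tensor scale preserves magnitude
    return alpha * np.sign(W)

def binarized_gnn_layer(A_hat, X, W):
    """One message-passing layer with binarized weights: aggregate
    neighbor features with A_hat, transform with binarized W, apply
    ReLU. Binarizing the features as well would reduce the matrix
    product to bit operations plus scaling."""
    return np.maximum((A_hat @ X) @ binarize(W), 0.0)
```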
arXiv Detail & Related papers (2020-04-19T09:43:14Z)