Deep Graph Neural Networks via Flexible Subgraph Aggregation
- URL: http://arxiv.org/abs/2305.05368v2
- Date: Tue, 30 May 2023 10:17:42 GMT
- Title: Deep Graph Neural Networks via Flexible Subgraph Aggregation
- Authors: Jingbo Zhou, Yixuan Du, Ruqiong Zhang, Di Jin, Carl Yang, Rui Zhang
- Abstract summary: Graph neural networks (GNNs) learn from graph-structured data, building node representations by aggregating neighborhood information.
In this paper, we evaluate the expressive power of GNNs from the perspective of subgraph aggregation.
We propose a sampling-based node-level residual module (SNR) that can achieve a more flexible utilization of different hops of subgraph aggregation.
- Score: 50.034313206471694
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs), neural networks that learn from
graph-structured data by aggregating neighborhood information into node
representations, have shown strong performance on various downstream tasks.
However, it is known that the performance of GNNs degrades gradually as
the number of layers increases. In this paper, we evaluate the expressive power
of GNNs from the perspective of subgraph aggregation. We reveal the potential
cause of performance degradation in traditional deep GNNs, namely aggregated
subgraph overlap, and we show theoretically that previous residual-based GNNs
exploit the aggregation results of 1- to $k$-hop subgraphs to improve their
effectiveness. Further, we find that the utilization of
different subgraphs by previous models is often inflexible. Based on this, we
propose a sampling-based node-level residual module (SNR) that can achieve a
more flexible utilization of different hops of subgraph aggregation by
introducing node-level parameters sampled from a learnable distribution.
Extensive experiments show that GNNs equipped with our proposed SNR module
outperform a comprehensive set of baselines.
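The abstract stops short of the exact formulation of SNR, so the following is only a minimal sketch of the stated idea in PyTorch: a per-node gate is sampled from a learnable Gaussian via the reparameterization trick and mixes one further hop of aggregation with the residual input. The sigmoid squashing, the per-node Gaussian parameterization, and the layer structure are assumptions, not the authors' published design.

```python
# Hypothetical sketch of a sampling-based node-level residual (SNR) layer.
import torch
import torch.nn as nn


class SNRLayer(nn.Module):
    def __init__(self, dim: int, num_nodes: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)
        # Assumed design: one learnable Gaussian per node whose samples act
        # as that node's residual gate (the abstract only says "node-level
        # parameters sampled from a learnable distribution").
        self.mu = nn.Parameter(torch.zeros(num_nodes, 1))
        self.log_sigma = nn.Parameter(torch.zeros(num_nodes, 1))

    def forward(self, adj_norm: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        agg = self.linear(adj_norm @ h)  # one more hop of aggregation
        # Reparameterization trick keeps the sampled gate differentiable;
        # sigmoid squashes it into (0, 1).
        eps = torch.randn_like(self.mu)
        s = torch.sigmoid(self.mu + self.log_sigma.exp() * eps)
        # Node-wise convex combination: each node chooses how much of the
        # deeper subgraph aggregation to mix into its representation.
        return s * agg + (1.0 - s) * h


# Usage: stacking k such layers exposes 1- to k-hop subgraph aggregations,
# each mixed per node.
n, d = 5, 8
adj_norm = torch.eye(n)  # stand-in for a normalized adjacency matrix
layer = SNRLayer(dim=d, num_nodes=n)
print(layer(adj_norm, torch.randn(n, d)).shape)  # torch.Size([5, 8])
```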
Related papers
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z)
- Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that weight matrices are learned separately for the nodes in each degree group.
This simple-to-implement modification seems to improve performance across datasets and GNN methods.
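As a rough illustration of this stratification (not the authors' code), one realization buckets nodes by degree and routes each bucket's aggregated features through its own weight matrix; the thresholds and the plain sum aggregator here are assumptions.

```python
# Illustrative sketch: nodes are bucketed by degree and each bucket gets its
# own weight matrix inside one GNN layer.
import torch
import torch.nn as nn


class DegreeStratifiedLayer(nn.Module):
    def __init__(self, dim: int, boundaries: list):
        super().__init__()
        # Degree thresholds defining the strata, e.g. [2.0, 10.0] yields
        # degree <= 2, 2 < degree <= 10, degree > 10 (assumed scheme).
        self.register_buffer("boundaries", torch.tensor(boundaries))
        self.linears = nn.ModuleList(
            nn.Linear(dim, dim) for _ in range(len(boundaries) + 1)
        )

    def forward(self, adj: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        agg = adj @ h                # plain sum aggregation over neighbors
        degree = adj.sum(dim=1)      # node degrees read off the adjacency
        group = torch.bucketize(degree, self.boundaries)
        out = torch.zeros_like(agg)
        for g, lin in enumerate(self.linears):
            mask = group == g        # nodes falling into stratum g
            if mask.any():
                out[mask] = lin(agg[mask])  # stratum-specific weights
        return torch.relu(out)


layer = DegreeStratifiedLayer(dim=8, boundaries=[2.0, 10.0])
print(layer(torch.eye(5), torch.randn(5, 8)).shape)  # torch.Size([5, 8])
```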
arXiv Detail & Related papers (2023-12-16T14:09:23Z)
- AGNN: Alternating Graph-Regularized Neural Networks to Alleviate Over-Smoothing [29.618952407794776]
We propose an Alternating Graph-regularized Neural Network (AGNN) composed of a Graph Convolutional Layer (GCL) and a Graph Embedding Layer (GEL).
GEL is derived from a graph-regularized optimization containing a Laplacian embedding term, which can alleviate the over-smoothing problem.
AGNN is evaluated in extensive experiments, including performance comparisons with multi-layer and multi-order graph neural networks.
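The summary suggests GEL solves a graph-regularized objective; here is a small sketch of one standard reading, assuming the objective min_H ||H - X||_F^2 + lam * tr(H^T L H) with closed-form solution H = (I + lam * L)^{-1} X. The alternation with GCL and any learned parameters are omitted; this follows the summary, not the authors' released code.

```python
# Sketch of a graph embedding layer as the closed-form minimizer of the
# assumed objective ||H - X||_F^2 + lam * tr(H^T L H).
import torch


def graph_embedding_layer(adj: torch.Tensor, x: torch.Tensor,
                          lam: float) -> torch.Tensor:
    lap = torch.diag(adj.sum(dim=1)) - adj  # unnormalized graph Laplacian L
    n = adj.shape[0]
    # Setting the gradient 2(H - X) + 2*lam*L*H to zero gives
    # (I + lam*L) H = X; the fidelity term keeps H anchored to the input,
    # which is what the summary credits with alleviating over-smoothing.
    return torch.linalg.solve(torch.eye(n) + lam * lap, x)


adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
print(graph_embedding_layer(adj, torch.randn(3, 4), lam=0.5).shape)  # (3, 4)
```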
arXiv Detail & Related papers (2023-04-14T09:20:03Z)
- On Over-Squashing in Message Passing Neural Networks: The Impact of Width, Depth, and Topology [4.809459273366461]
Message Passing Neural Networks (MPNNs) are instances of Graph Neural Networks that leverage the graph to send messages over the edges.
This inductive bias leads to a phenomenon known as over-squashing, where a node feature is insensitive to information contained at distant nodes.
Despite recent methods introduced to mitigate this issue, an understanding of the causes of over-squashing and of possible solutions is lacking.
arXiv Detail & Related papers (2023-02-06T17:16:42Z)
- ResNorm: Tackling Long-tailed Degree Distribution Issue in Graph Neural Networks via Normalization [80.90206641975375]
This paper focuses on improving the performance of GNNs via normalization.
By studying the long-tailed distribution of node degrees in the graph, we propose a novel normalization method for GNNs.
The $scale$ operation of ResNorm reshapes the node-wise standard deviation (NStd) distribution so as to improve the accuracy of tail nodes.
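The summary does not give the formula, so this is a hypothetical rendering of such a scale step: each node's feature vector is rescaled by a power of its own standard deviation (NStd), compressing the spread of NStd values across nodes; the exponent and the lack of centering are assumptions, not ResNorm's published operation.

```python
# Hypothetical node-wise scale: dividing by nstd**0.5 maps a node's std from
# sigma to sqrt(sigma), pulling low-NStd (often tail) nodes up relative to
# head nodes. Not ResNorm's exact formula.
import torch


def nstd_scale(h: torch.Tensor, power: float = 0.5,
               eps: float = 1e-6) -> torch.Tensor:
    nstd = h.std(dim=1, keepdim=True)  # node-wise std over features, (N, 1)
    return h / (nstd + eps).pow(power)


h = torch.randn(4, 16) * torch.tensor([[0.1], [0.5], [1.0], [5.0]])
print(nstd_scale(h).std(dim=1))  # node-wise stds pulled closer together
```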
arXiv Detail & Related papers (2022-06-16T13:49:09Z)
- Non-Gradient Manifold Neural Network [79.44066256794187]
Deep neural networks (DNNs) generally take thousands of iterations to optimize via gradient descent.
We propose a novel manifold neural network based on non-gradient optimization.
arXiv Detail & Related papers (2021-06-15T06:39:13Z)
- Non-Recursive Graph Convolutional Networks [33.459371861932574]
We propose a novel architecture named Non-Recursive Graph Convolutional Network (NRGCN) to improve both the training efficiency and the learning performance of GCNs.
NRGCN represents different hops of neighbors for each node based on inner-layer aggregation and layer-independent sampling.
In this way, each node can be directly represented by concatenating the information extracted independently from each hop of its neighbors.
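A minimal sketch of that non-recursive scheme: every hop's aggregation is precomputed directly from the raw features and the hops are concatenated, so training needs no nested layer recursion. The row-normalized aggregator and linear head are assumptions, and NRGCN's layer-independent sampling is omitted.

```python
# Precompute each hop from the raw features, keep the hops separate, and
# fuse them by concatenation (assumed fusion; NRGCN's sampling is omitted).
import torch
import torch.nn as nn


def multi_hop_concat(adj_norm: torch.Tensor, x: torch.Tensor,
                     k: int) -> torch.Tensor:
    hops, h = [x], x
    for _ in range(k):
        h = adj_norm @ h            # information from the next hop out
        hops.append(h)              # each hop extracted independently
    return torch.cat(hops, dim=1)   # shape (N, (k + 1) * d)


n, d, k = 6, 4, 3
adj_norm = torch.rand(n, n)
adj_norm = adj_norm / adj_norm.sum(dim=1, keepdim=True)  # row-normalize
feats = multi_hop_concat(adj_norm, torch.randn(n, d), k)
head = nn.Linear((k + 1) * d, 2)    # classifier over concatenated hops
print(head(feats).shape)            # torch.Size([6, 2])
```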
arXiv Detail & Related papers (2021-05-09T08:12:18Z)
- Overcoming Catastrophic Forgetting in Graph Neural Networks [50.900153089330175]
Catastrophic forgetting refers to the tendency of a neural network to "forget" previously learned knowledge upon learning new tasks.
We propose a novel scheme dedicated to overcoming this problem and hence strengthening continual learning in graph neural networks (GNNs).
At the heart of our approach is a generic module, termed topology-aware weight preserving (TWP).
arXiv Detail & Related papers (2020-12-10T22:30:25Z)
- NCGNN: Node-level Capsule Graph Neural Network [45.23653314235767]
Node-level Capsule Graph Neural Network (NCGNN) represents nodes as groups of capsules.
A novel dynamic routing procedure is developed to adaptively select appropriate capsules for aggregation.
NCGNN can well address the over-smoothing issue and outperforms the state of the art by producing better node embeddings for classification.
arXiv Detail & Related papers (2020-12-07T06:46:17Z)