Learning Graph Normalization for Graph Neural Networks
- URL: http://arxiv.org/abs/2009.11746v1
- Date: Thu, 24 Sep 2020 15:16:43 GMT
- Title: Learning Graph Normalization for Graph Neural Networks
- Authors: Yihao Chen, Xin Tang, Xianbiao Qi, Chun-Guang Li, Rong Xiao
- Abstract summary: Graph Neural Networks (GNNs) have attracted considerable attention and have emerged as a new promising paradigm to process graph-structured data.
To train a GNN with multiple layers effectively, some normalization techniques (e.g., node-wise normalization, batch-wise normalization) are necessary.
We learn graph normalization by optimizing a weighted combination of normalization techniques at four different levels.
- Score: 9.481176167555164
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have attracted considerable attention and have
emerged as a promising new paradigm for processing graph-structured data. GNNs
are usually stacked into multiple layers, and the node representations in each
layer are computed by propagating and aggregating the neighboring node features
with respect to the graph. By stacking multiple layers, GNNs are able to
capture long-range dependencies among the data on the graph and thus improve
performance. To train a GNN with multiple layers effectively, some
normalization techniques (e.g., node-wise normalization, batch-wise
normalization) are necessary. However, the most suitable normalization
technique for a GNN is highly task-dependent: different application tasks
prefer different normalization techniques, and the right choice is hard to know
in advance. To address this issue, in this paper we propose to learn graph normalization by
optimizing a weighted combination of normalization techniques at four different
levels, including node-wise normalization, adjacency-wise normalization,
graph-wise normalization, and batch-wise normalization, in which the
adjacency-wise normalization and the graph-wise normalization are newly
proposed in this paper to take into account the local structure and the global
structure on the graph, respectively. By learning the optimal weights, we can
automatically select either the single best normalization technique or the best
combination of several for a specific task. We conduct extensive experiments on
benchmark datasets for different tasks, including node classification, link
prediction, graph classification and graph regression, and confirm that the
learned graph normalization leads to competitive results and that the learned
weights suggest the appropriate normalization techniques for the specific task.
Source code is released at https://github.com/cyh1112/GraphNormalization.
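The core mechanism described in the abstract, a learned softmax-weighted combination of node-wise, adjacency-wise, graph-wise, and batch-wise normalization, can be sketched as follows. This is a minimal PyTorch illustration, not the authors' released implementation (see the repository above): the class name LearnedGraphNorm, the dense-adjacency interface, and the simplified neighborhood mean/variance statistics used for the adjacency-wise level are all assumptions made for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LearnedGraphNorm(nn.Module):
    """Softmax-weighted combination of four normalization levels (sketch)."""

    def __init__(self, dim: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        self.logits = nn.Parameter(torch.zeros(4))           # learned weights over the 4 levels
        self.batch_norm = nn.BatchNorm1d(dim, affine=False)  # batch-wise level
        self.gamma = nn.Parameter(torch.ones(dim))           # shared affine parameters
        self.beta = nn.Parameter(torch.zeros(dim))

    @staticmethod
    def _standardize(x, mean, var, eps):
        return (x - mean) / torch.sqrt(var + eps)

    def forward(self, x, adj, graph_ids):
        # x: (N, dim) node features for a batch of graphs.
        # adj: (N, N) dense float adjacency matrix with self-loops.
        # graph_ids: (N,) long tensor mapping each node to its graph.

        # 1) Node-wise: standardize each node over its own feature dimension.
        node = self._standardize(
            x, x.mean(-1, keepdim=True),
            x.var(-1, unbiased=False, keepdim=True), self.eps)

        # 2) Adjacency-wise (simplified here): standardize each node with
        #    mean/variance statistics over its neighborhood (local structure).
        deg = adj.sum(-1, keepdim=True).clamp(min=1.0)
        nb_mean = adj @ x / deg
        nb_var = (adj @ x.pow(2) / deg - nb_mean.pow(2)).clamp(min=0.0)
        adjacency = self._standardize(x, nb_mean, nb_var, self.eps)

        # 3) Graph-wise: standardize over all nodes of the same graph
        #    (global structure), via per-graph scatter sums.
        num_graphs = int(graph_ids.max()) + 1
        counts = torch.zeros(num_graphs, 1, dtype=x.dtype).index_add_(
            0, graph_ids, torch.ones(x.size(0), 1, dtype=x.dtype))
        g_mean = torch.zeros(num_graphs, x.size(1), dtype=x.dtype).index_add_(
            0, graph_ids, x) / counts
        g_var = (torch.zeros(num_graphs, x.size(1), dtype=x.dtype).index_add_(
            0, graph_ids, x.pow(2)) / counts - g_mean.pow(2)).clamp(min=0.0)
        graph = self._standardize(x, g_mean[graph_ids], g_var[graph_ids], self.eps)

        # 4) Batch-wise: ordinary BatchNorm over all nodes in the batch.
        batch = self.batch_norm(x)

        # Learned combination: the softmax weights select one level or a mixture.
        w = F.softmax(self.logits, dim=0)
        out = w[0] * node + w[1] * adjacency + w[2] * graph + w[3] * batch
        return out * self.gamma + self.beta
```

A toy invocation under the same assumed interface, with two small graphs batched together:

```python
# Illustrative shapes only: 5 nodes, 8 features, graphs {0,1,2} and {3,4}.
x = torch.randn(5, 8)
adj = torch.eye(5)
adj[0, 1] = adj[1, 0] = adj[3, 4] = adj[4, 3] = 1.0
graph_ids = torch.tensor([0, 0, 0, 1, 1])
norm = LearnedGraphNorm(8)
print(norm(x, adj, graph_ids).shape)  # torch.Size([5, 8])
```

After training, inspecting F.softmax(norm.logits, dim=0) indicates which normalization level(s) the learned weights favor, mirroring the paper's observation that the learned weights suggest the appropriate technique for the task at hand.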
Related papers
- DA-MoE: Addressing Depth-Sensitivity in Graph-Level Analysis through Mixture of Experts [70.21017141742763]
Graph neural networks (GNNs) are gaining popularity for processing graph-structured data.
Existing methods generally use a fixed number of GNN layers to generate representations for all graphs.
We propose the depth adaptive mixture of expert (DA-MoE) method, which incorporates two main improvements to GNN.
arXiv Detail & Related papers (2024-11-05T11:46:27Z)
- Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world applications of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
arXiv Detail & Related papers (2024-05-27T17:52:12Z)
- GRANOLA: Adaptive Normalization for Graph Neural Networks [28.993479890213617]
We propose a graph-adaptive normalization layer, GRANOLA, for Graph Neural Network (GNN) layers.
Unlike existing normalization layers, GRANOLA normalizes node features by adapting to the specific characteristics of the graph.
Our empirical evaluation of various graph benchmarks underscores the superior performance of GRANOLA over existing normalization techniques.
arXiv Detail & Related papers (2024-04-20T10:44:13Z)
- T-GAE: Transferable Graph Autoencoder for Network Alignment [79.89704126746204]
T-GAE is a graph autoencoder framework that leverages the transferability and stability of GNNs to achieve efficient network alignment without retraining.
Our experiments demonstrate that T-GAE outperforms the state-of-the-art optimization method and the best GNN approach by up to 38.7% and 50.8%, respectively.
arXiv Detail & Related papers (2023-10-05T02:58:29Z)
- Graph Mixture of Experts: Learning on Large-Scale Graphs with Explicit Diversity Modeling [60.0185734837814]
Graph neural networks (GNNs) have found extensive applications in learning from graph data.
To bolster the generalization capacity of GNNs, it has become customary to enrich training graph structures with techniques such as graph augmentations.
This study introduces the concept of Mixture-of-Experts (MoE) to GNNs, with the aim of augmenting their capacity to adapt to a diverse range of training graph structures.
arXiv Detail & Related papers (2023-04-06T01:09:36Z) - Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z) - Adaptive Kernel Graph Neural Network [21.863238974404474]
Graph neural networks (GNNs) have demonstrated great success in representation learning for graph-structured data.
In this paper, we propose a novel framework, namely the Adaptive Kernel Graph Neural Network (AKGNN).
AKGNN makes a first attempt at learning to adapt to the optimal graph kernel in a unified manner.
Experiments conducted on acknowledged benchmark datasets yield promising results that demonstrate the strong performance of the proposed AKGNN.
arXiv Detail & Related papers (2021-12-08T20:23:58Z) - GraphNorm: A Principled Approach to Accelerating Graph Neural Network
Training [101.3819906739515]
We study which normalization is effective for Graph Neural Networks (GNNs).
Faster convergence is achieved with InstanceNorm compared to BatchNorm and LayerNorm.
GraphNorm also improves the generalization of GNNs, achieving better performance on graph classification benchmarks.
arXiv Detail & Related papers (2020-09-07T17:55:21Z)