ResNorm: Tackling Long-tailed Degree Distribution Issue in Graph Neural
Networks via Normalization
- URL: http://arxiv.org/abs/2206.08181v2
- Date: Mon, 4 Sep 2023 10:11:12 GMT
- Title: ResNorm: Tackling Long-tailed Degree Distribution Issue in Graph Neural
Networks via Normalization
- Authors: Langzhang Liang, Zenglin Xu, Zixing Song, Irwin King, Yuan Qi, Jieping
Ye
- Abstract summary: This paper focuses on improving the performance of GNNs via normalization.
By studying the long-tailed distribution of node degrees in the graph, we propose a novel normalization method for GNNs.
The $scale$ operation of ResNorm reshapes the node-wise standard deviation (NStd) distribution so as to improve the accuracy of tail nodes.
- Score: 80.90206641975375
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have attracted much attention due to their
ability in learning representations from graph-structured data. Despite the
successful applications of GNNs in many domains, the optimization of GNNs is
less well studied, and the performance on node classification heavily suffers
from the long-tailed node degree distribution. This paper focuses on improving
the performance of GNNs via normalization.
In detail, by studying the long-tailed distribution of node degrees in the
graph, we propose a novel normalization method for GNNs, which is termed
ResNorm (Reshaping the long-tailed distribution into a normal-like
distribution via normalization). The $scale$ operation of ResNorm
reshapes the node-wise standard deviation (NStd) distribution so as to improve
the accuracy of tail nodes (i.e., low-degree nodes). We
provide a theoretical interpretation and empirical evidence for understanding
the mechanism of the above $scale$. In addition to the long-tailed distribution
issue, over-smoothing is also a fundamental issue plaguing the community. To
this end, we analyze the behavior of the standard shift and prove that the
standard shift serves as a preconditioner on the weight matrix, increasing the
risk of over-smoothing. With the over-smoothing issue in mind, we design a
$shift$ operation for ResNorm that simulates the degree-specific parameter
strategy in a low-cost manner. Extensive experiments have validated the
effectiveness of ResNorm on several node classification benchmark datasets.
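
As a rough illustration of the two operations the abstract describes, the sketch below applies a node-wise scale that compresses the NStd distribution and a cheap degree-aware shift. The power transform, the coefficient `alpha`, and the function name `resnorm_like` are assumptions made for illustration, not the exact ResNorm formulation from the paper.

```python
import torch

def resnorm_like(h: torch.Tensor, deg: torch.Tensor,
                 p: float = 0.5, eps: float = 1e-6) -> torch.Tensor:
    # h:   (N, d) node representations
    # deg: (N,)   node degrees, used here only for a degree-aware shift
    # p:   exponent in (0, 1]; smaller p compresses the NStd distribution more
    # NOTE: illustrative stand-in for the idea in the abstract, NOT the exact
    # ResNorm operations defined in the paper.
    nstd = h.std(dim=1, keepdim=True) + eps      # node-wise std (NStd), (N, 1)
    h_scaled = h * nstd.pow(p - 1.0)             # "scale": new NStd ~= nstd ** p
    alpha = 1.0 / (1.0 + deg.float()).sqrt()     # degree-dependent coefficient, (N,)
    mean = h_scaled.mean(dim=1, keepdim=True)    # per-node feature mean, (N, 1)
    return h_scaled - alpha.unsqueeze(1) * mean  # "shift": cheap degree-aware shift
```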
Related papers
- Mitigating Degree Bias in Signed Graph Neural Networks [5.042342963087923]
Signed Graph Neural Networks (SGNNs) face fairness issues arising from the source data and the typical aggregation method.
In this paper, we extend the investigation of fairness from GNNs to SGNNs.
We identify the issue of degree bias within signed graphs, offering a new perspective on the fairness issues related to SGNNs.
arXiv Detail & Related papers (2024-08-16T03:22:18Z)
- Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that the weight matrices are learned, separately, for the nodes in each group.
This simple-to-implement modification seems to improve performance across datasets and GNN methods.
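
A minimal sketch of learning a separate weight matrix per degree group is given below; the degree bins, the propagation rule (plain neighbor sum), and the class name `DegreeStratifiedLayer` are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class DegreeStratifiedLayer(nn.Module):
    """Sketch: one weight matrix per degree group (illustrative, dense adjacency)."""

    def __init__(self, in_dim: int, out_dim: int, degree_bins=(1, 5, 20)):
        super().__init__()
        self.degree_bins = torch.tensor(degree_bins)
        # One linear transform per degree group (len(bins) + 1 groups).
        self.weights = nn.ModuleList(
            [nn.Linear(in_dim, out_dim) for _ in range(len(degree_bins) + 1)]
        )

    def forward(self, adj: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        deg = adj.sum(dim=1)                                           # (N,) node degrees
        group = torch.bucketize(deg, self.degree_bins.to(deg.device))  # (N,) group index
        agg = adj @ x                                                  # simple neighbor sum
        out = torch.zeros(x.size(0), self.weights[0].out_features, device=x.device)
        for g, lin in enumerate(self.weights):                         # group-specific W
            mask = group == g
            if mask.any():
                out[mask] = lin(agg[mask])
        return out
```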
arXiv Detail & Related papers (2023-12-16T14:09:23Z)
- Implicit Graph Neural Diffusion Networks: Convergence, Generalization, and Over-Smoothing [7.984586585987328]
Implicit Graph Neural Networks (GNNs) have achieved significant success in addressing graph learning problems.
We introduce a geometric framework for designing implicit graph diffusion layers based on a parameterized graph Laplacian operator.
We show how implicit GNN layers can be viewed as the fixed-point equation of a Dirichlet energy minimization problem.
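
The sketch below illustrates the fixed-point view for a simple, non-parameterized Dirichlet energy min_Z ||Z - X||_F^2 + lam * tr(Z^T L Z); the damping factor and solver are illustrative choices, not the paper's parameterized graph Laplacian operator.

```python
import torch

def dirichlet_fixed_point(L: torch.Tensor, X: torch.Tensor,
                          lam: float = 0.5, iters: int = 100) -> torch.Tensor:
    # Setting the gradient of the energy to zero gives (I + lam*L) Z = X,
    # i.e. the fixed-point equation Z = X - lam * L @ Z.
    Z = X.clone()
    for _ in range(iters):
        # Damped fixed-point iteration; converges when lam * ||L|| is modest.
        Z = 0.5 * Z + 0.5 * (X - lam * (L @ Z))
    return Z

# Closed-form solution of the same quadratic problem, for comparison:
# Z_star = torch.linalg.solve(torch.eye(L.size(0)) + lam * L, X)
```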
arXiv Detail & Related papers (2023-08-07T05:22:33Z)
- Improving Expressivity of GNNs with Subgraph-specific Factor Embedded Normalization [30.86182962089487]
Graph Neural Networks (GNNs) have emerged as a powerful category of learning architecture for handling graph-structured data.
We propose a dedicated plug-and-play normalization scheme, termed SUbgraph-sPEcific FactoR Embedded Normalization (SuperNorm).
arXiv Detail & Related papers (2023-05-31T14:37:31Z)
- OrthoReg: Improving Graph-regularized MLPs via Orthogonality Regularization [66.30021126251725]
Graph Neural Networks (GNNs) are currently dominant in modeling graph-structured data.
Graph-regularized MLPs (GR-MLPs) implicitly inject the graph structure information into model weights, while their performance can hardly match that of GNNs in most tasks.
We show that GR-MLPs suffer from dimensional collapse, a phenomenon in which the largest few eigenvalues dominate the embedding space.
We propose OrthoReg, a novel GR-MLP model to mitigate the dimensional collapse issue.
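
A generic orthogonality-style regularizer on node embeddings, sketched below, illustrates one way to counteract dimensional collapse; it is not necessarily the exact OrthoReg objective, and the function name `orthogonality_penalty` is ours.

```python
import torch

def orthogonality_penalty(z: torch.Tensor) -> torch.Tensor:
    # Encourage the feature correlation matrix to stay close to identity,
    # which counteracts collapse where a few eigenvalues dominate.
    z = z - z.mean(dim=0, keepdim=True)            # center each feature
    z = torch.nn.functional.normalize(z, dim=0)    # unit-norm feature columns
    cov = z.t() @ z                                # (d, d) feature correlation
    eye = torch.eye(cov.size(0), device=z.device)
    return ((cov - eye) ** 2).sum()                # Frobenius distance to identity

# Typical use (illustrative): loss = task_loss + beta * orthogonality_penalty(embeddings)
```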
arXiv Detail & Related papers (2023-01-31T21:20:48Z)
- RawlsGCN: Towards Rawlsian Difference Principle on Graph Convolutional Network [102.27090022283208]
Graph Convolutional Network (GCN) plays pivotal roles in many real-world applications.
GCN often exhibits performance disparity with respect to node degrees, resulting in worse predictive accuracy for low-degree nodes.
We formulate the problem of mitigating the degree-related performance disparity in GCN from the perspective of the Rawlsian difference principle.
arXiv Detail & Related papers (2022-02-28T05:07:57Z)
- GraphNorm: A Principled Approach to Accelerating Graph Neural Network Training [101.3819906739515]
We study what normalization is effective for Graph Neural Networks (GNNs).
Faster convergence is achieved with InstanceNorm compared to BatchNorm and LayerNorm.
GraphNorm also improves the generalization of GNNs, achieving better performance on graph classification benchmarks.
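
A simplified sketch of instance-norm-style normalization over the nodes of a single graph is shown below; the learnable mean coefficient follows the spirit of GraphNorm, but batching over multiple graphs and other details are omitted, and the class name is ours.

```python
import torch
import torch.nn as nn

class GraphInstanceNorm(nn.Module):
    """Sketch: per-graph, feature-wise normalization with a learnable mean shift."""

    def __init__(self, dim: int, eps: float = 1e-5):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(dim))   # how much of the mean to remove
        self.gamma = nn.Parameter(torch.ones(dim))   # affine scale
        self.beta = nn.Parameter(torch.zeros(dim))   # affine shift
        self.eps = eps

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (N, d) features of the nodes of a single graph.
        mean = h.mean(dim=0, keepdim=True)
        shifted = h - self.alpha * mean
        std = shifted.std(dim=0, keepdim=True)
        return self.gamma * shifted / (std + self.eps) + self.beta
```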
arXiv Detail & Related papers (2020-09-07T17:55:21Z)
- Understanding and Resolving Performance Degradation in Graph Convolutional Networks [105.14867349802898]
Graph Convolutional Network (GCN) stacks several layers and in each layer performs a PROPagation operation (PROP) and a TRANsformation operation (TRAN) for learning node representations over graph-structured data.
GCNs tend to suffer a performance drop when the model gets deep.
We study performance degradation of GCNs by experimentally examining how stacking only TRANs or PROPs works.
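
For reference, a GCN layer written as an explicit PROP step (neighbor aggregation with the normalized adjacency) followed by a TRAN step (linear transform plus nonlinearity) might look like the dense-adjacency sketch below; the class and method names are illustrative.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """Sketch: a GCN layer decomposed into PROP and TRAN (dense adjacency)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def prop(self, adj_norm: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        # PROP: aggregate neighbor representations with the normalized adjacency.
        return adj_norm @ h

    def tran(self, h: torch.Tensor) -> torch.Tensor:
        # TRAN: feature transformation plus nonlinearity.
        return torch.relu(self.lin(h))

    def forward(self, adj_norm: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        return self.tran(self.prop(adj_norm, h))
```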
arXiv Detail & Related papers (2020-06-12T12:12:12Z)
- Graph Random Neural Network for Semi-Supervised Learning on Graphs [36.218650686748546]
We study the problem of semi-supervised learning on graphs, for which graph neural networks (GNNs) have been extensively explored.
Most existing GNNs inherently suffer from the limitations of over-smoothing, non-robustness, and weak generalization when labeled nodes are scarce.
In this paper, we propose a simple yet effective framework -- GRAPH RANDOM NEURAL NETWORKS (GRAND) -- to address these issues.
arXiv Detail & Related papers (2020-05-22T09:40:13Z)