The Oversmoothing Fallacy: A Misguided Narrative in GNN Research
- URL: http://arxiv.org/abs/2506.04653v1
- Date: Thu, 05 Jun 2025 05:49:12 GMT
- Title: The Oversmoothing Fallacy: A Misguided Narrative in GNN Research
- Authors: MoonJeong Park, Sunghyun Choi, Jaeseung Heo, Eunhyeok Park, Dongwoo Kim
- Abstract summary: Oversmoothing has been recognized as a main obstacle to building deep Graph Neural Networks (GNNs). This paper argues that the influence of oversmoothing has been overstated and advocates for further exploration of deep GNN architectures.
- Score: 9.694010867775068
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Oversmoothing has been recognized as a main obstacle to building deep Graph Neural Networks (GNNs), limiting their performance. This position paper argues that the influence of oversmoothing has been overstated and advocates for further exploration of deep GNN architectures. Given the three core operations of GNNs (aggregation, linear transformation, and non-linear activation), we show that prior studies have mistakenly confused oversmoothing with the vanishing gradient, which is caused by transformation and activation rather than aggregation. Our finding challenges the prior belief that oversmoothing is unique to GNNs. Furthermore, we demonstrate that classical solutions such as skip connections and normalization enable the successful stacking of deep GNN layers without performance degradation. Our results clarify misconceptions about oversmoothing and shed new light on the potential of deep GNNs.
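The abstract's point about classical remedies can be illustrated with a toy sketch (not the paper's exact setup; the graph, layer sizes, depth, and initialization below are arbitrary assumptions): stacking aggregation, linear transformation, and ReLU alone drives node embeddings toward collapse, while wrapping each layer with a skip connection and layer normalization keeps them distinct at the same depth.

```python
import numpy as np

# Toy sketch: deep GNN stack with/without skip connections + layer norm.
# Graph, dimensions, and depth are illustrative assumptions, not the paper's setup.

rng = np.random.default_rng(0)

# 4-node path graph with self-loops, row-normalized (mean aggregation).
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
A = A / A.sum(axis=1, keepdims=True)

def layer_norm(H, eps=1e-5):
    # Normalize each node's feature vector to zero mean, unit std.
    mu = H.mean(axis=1, keepdims=True)
    sd = H.std(axis=1, keepdims=True)
    return (H - mu) / (sd + eps)

def run(depth, residual):
    H = rng.standard_normal((4, 8))
    for _ in range(depth):
        W = rng.standard_normal((8, 8)) / np.sqrt(8)
        Z = np.maximum(A @ H @ W, 0.0)      # aggregate, transform, activate
        H = layer_norm(H + Z) if residual else Z
    # Spread of node embeddings around their mean; near zero = collapsed.
    return np.linalg.norm(H - H.mean(axis=0, keepdims=True))

plain = run(depth=32, residual=False)   # plain stack: spread shrinks toward 0
skip = run(depth=32, residual=True)     # skip + norm: nodes stay distinct
print(plain, skip)
```

The spread metric here is just the Frobenius norm of the mean-centered embeddings, a simple stand-in for the collapse measures used in the oversmoothing literature.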
Related papers
- On Vanishing Gradients, Over-Smoothing, and Over-Squashing in GNNs: Bridging Recurrent and Graph Learning [15.409865070022951]
Graph Neural Networks (GNNs) are models that leverage the graph structure to transmit information between nodes. We show that a simple state-space formulation of a GNN effectively alleviates over-smoothing and over-squashing at no extra trainable parameter cost.
arXiv Detail & Related papers (2025-02-15T14:43:41Z)
- Spiking Graph Neural Network on Riemannian Manifolds [51.15400848660023]
Graph neural networks (GNNs) have become the dominant solution for learning on graphs.
Existing spiking GNNs consider graphs in Euclidean space, ignoring the structural geometry.
We present a Manifold-valued Spiking GNN (MSG).
MSG achieves superior performance to previous spiking GNNs and superior energy efficiency to conventional GNNs.
arXiv Detail & Related papers (2024-10-23T15:09:02Z)
- Certified Defense on the Fairness of Graph Neural Networks [86.14235652889242]
Graph Neural Networks (GNNs) have emerged as a prominent graph learning model in various graph-based tasks. However, malicious attackers could easily corrupt the fairness level of their predictions by adding perturbations to the input graph data. We propose a principled framework named ELEGANT to study a novel problem of certifiable defense on the fairness level of GNNs.
arXiv Detail & Related papers (2023-11-05T20:29:40Z)
- Feature Overcorrelation in Deep Graph Neural Networks: A New Perspective [44.96635754139024]
Oversmoothing has been identified as one of the key issues which limit the performance of deep GNNs.
We propose a new perspective on the performance degradation of deep GNNs: feature overcorrelation.
To reduce feature correlation, we propose DeCorr, a general framework that encourages GNNs to encode less redundant information.
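The overcorrelation idea can be sketched with a generic diagnostic: the mean absolute Pearson correlation between feature dimensions. (This metric and the toy data are illustrative assumptions, not DeCorr's actual objective.)

```python
import numpy as np

# Generic diagnostic for feature overcorrelation: average absolute
# Pearson correlation between pairs of feature columns.
# Illustrative assumption, not DeCorr's actual training objective.

def mean_abs_feature_corr(H):
    # H: (num_nodes, num_features); correlate columns (features).
    C = np.corrcoef(H, rowvar=False)
    d = C.shape[0]
    off_diagonal = C[~np.eye(d, dtype=bool)]
    return float(np.abs(off_diagonal).mean())

rng = np.random.default_rng(1)
independent = rng.standard_normal((100, 8))            # uncorrelated features
base = rng.standard_normal((100, 1))
redundant = base + 0.01 * rng.standard_normal((100, 8))  # near-duplicate columns

low = mean_abs_feature_corr(independent)    # near 0: features carry distinct info
high = mean_abs_feature_corr(redundant)     # near 1: features are redundant
```

A value near 1 indicates that feature dimensions encode largely redundant information, which is the failure mode the paper attributes to deep GNNs.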
arXiv Detail & Related papers (2022-06-15T18:13:52Z)
- Universal Deep GNNs: Rethinking Residual Connection in GNNs from a Path Decomposition Perspective for Preventing the Over-smoothing [50.242926616772515]
Recent studies have shown that GNNs with residual connections only slightly slow down the degeneration.
In this paper, we investigate the forward and backward behavior of GNNs with residual connections from a novel path decomposition perspective.
We present a Universal Deep GNNs framework with cold-start adaptive residual connections (DRIVE) and feedforward modules.
arXiv Detail & Related papers (2022-05-30T14:19:45Z)
- Addressing Over-Smoothing in Graph Neural Networks via Deep Supervision [13.180922099929765]
Deep graph neural networks (GNNs) suffer from over-smoothing when the number of layers increases.
We propose DSGNNs, GNNs enhanced with deep supervision, where representations learned at all layers are used for training.
We show that DSGNNs are resilient to over-smoothing and can outperform competitive benchmarks on node and graph property prediction problems.
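The deep-supervision objective can be sketched generically (the layer representations and readouts below are random stand-ins, not the DSGNN architecture): attach a readout to every layer's representation and sum the per-layer losses, so shallow layers receive a direct training signal rather than relying only on the final layer.

```python
import numpy as np

# Generic sketch of a deep-supervision objective: one loss per layer, summed.
# Representations and readouts are random stand-ins, not the DSGNN model.

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

rng = np.random.default_rng(2)
target = rng.standard_normal((4, 1))                     # one label per node
layer_reps = [rng.standard_normal((4, 3)) for _ in range(3)]  # per-layer H
readouts = [rng.standard_normal((3, 1)) for _ in range(3)]    # per-layer heads

# Deep supervision: every layer's representation contributes its own loss term.
per_layer = [mse(H @ W, target) for H, W in zip(layer_reps, readouts)]
total_loss = sum(per_layer)
```

Summing the terms means gradients flow to each layer through its own readout, which is what makes the stack resilient when later layers over-smooth.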
arXiv Detail & Related papers (2022-02-25T06:05:55Z)
- Optimization of Graph Neural Networks: Implicit Acceleration by Skip Connections and More Depth [57.10183643449905]
Graph Neural Networks (GNNs) have been studied from the lens of expressive power and generalization.
We study the training dynamics of GNNs through deep skip optimization.
Our results provide the first theoretical support for the success of GNNs.
arXiv Detail & Related papers (2021-05-10T17:59:01Z)
- Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks [60.22494363676747]
It is known that current graph neural networks (GNNs) are difficult to make deep due to the problem known as over-smoothing.
Multi-scale GNNs are a promising approach for mitigating the over-smoothing problem.
We derive the optimization and generalization guarantees of transductive learning algorithms that include multi-scale GNNs.
arXiv Detail & Related papers (2020-06-15T17:06:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.