Better Not to Propagate: Understanding Edge Uncertainty and Over-smoothing in Signed Graph Neural Networks
- URL: http://arxiv.org/abs/2408.04895v3
- Date: Sat, 2 Nov 2024 06:10:08 GMT
- Title: Better Not to Propagate: Understanding Edge Uncertainty and Over-smoothing in Signed Graph Neural Networks
- Authors: Yoonhyuk Choi, Jiho Choi, Taewook Ko, Chong-Kwon Kim
- Abstract summary: We propose a novel method for estimating homophily and edge error ratio, integrated with dynamic selection between blocked and signed propagation during training.
Our theoretical analysis, supported by extensive experiments, demonstrates that blocking MP can be more effective than signed propagation under high edge error ratios.
- Score: 3.4498722449655066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional Graph Neural Networks (GNNs) rely on network homophily, which can lead to performance degradation due to over-smoothing in many real-world heterophily scenarios. Recent studies analyze the smoothing effect (separability) after message passing (MP), depending on the expectation of node features. Regarding separability gain, they provide a theoretical background on over-smoothing caused by various propagation schemes, including positive, signed, and blocked MP. More recently, by extending these theorems, some works have suggested improvements in signed propagation under multiple classes. However, prior works assume that the error ratio of all propagation schemes is fixed, failing to investigate this phenomenon correctly. To solve this problem, we propose a novel method for estimating the homophily and edge error ratio, integrated with dynamic selection between blocked and signed propagation during training. Our theoretical analysis, supported by extensive experiments, demonstrates that blocking MP can be more effective than signed propagation under high edge error ratios, improving performance on both homophilic and heterophilic graphs.
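To make the selection mechanism concrete, here is a minimal sketch of choosing between signed and blocked message passing from an estimated edge error ratio. The function names, the edge-sign convention, and the 0.3 threshold are illustrative assumptions, not the paper's actual estimator or decision rule.

```python
import numpy as np

def propagate(features, edges, signs, mode):
    """One round of message passing over an undirected edge list.
    mode == "signed":  edges flagged heterophilic (sign = -1) send negated messages.
    mode == "blocked": edges flagged heterophilic are dropped entirely."""
    out = features.copy()
    for (u, v), s in zip(edges, signs):
        if mode == "blocked" and s < 0:
            continue                      # do not propagate along uncertain edges
        w = s if mode == "signed" else 1.0
        out[u] += w * features[v]
        out[v] += w * features[u]
    return out

def select_propagation(est_edge_error_ratio, threshold=0.3):
    """Blocked MP when the estimated edge error ratio is high, signed MP otherwise."""
    return "blocked" if est_edge_error_ratio > threshold else "signed"

feats = np.eye(4)
edges = [(0, 1), (1, 2), (2, 3)]
signs = [1.0, -1.0, 1.0]                  # -1 marks an edge predicted heterophilic
mode = select_propagation(est_edge_error_ratio=0.45)
print(mode, propagate(feats, edges, signs, mode))
```

The dichotomy this illustrates: when edge signs are likely wrong (high error ratio), dropping those edges loses less than propagating wrongly signed messages.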
Related papers
- The Heterophilic Snowflake Hypothesis: Training and Empowering GNNs for Heterophilic Graphs [59.03660013787925]
We introduce the Heterophilic Snowflake Hypothesis and provide an effective solution to guide and facilitate research on heterophilic graphs.
Our observations show that our framework acts as a versatile operator for diverse tasks.
It can be integrated into various GNN frameworks, boosting performance at depth and offering an explainable approach to choosing the optimal network depth.
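One plausible reading of "choosing the optimal network depth" per node is an early-exit readout, sketched below; the gating mechanism and all names are our assumptions, not the authors' framework.

```python
import numpy as np

def per_node_depth_readout(layer_outputs, depths):
    """Give each node the representation from its own chosen depth.
    layer_outputs: list of (num_nodes, dim) arrays, one per GNN layer.
    depths: per-node depth indices (here supplied externally; a real
    model would learn or estimate them)."""
    stacked = np.stack(layer_outputs)        # (num_layers, N, dim)
    n = stacked.shape[1]
    return stacked[depths, np.arange(n)]     # (N, dim)

# toy usage: 3 layers, 4 nodes; node i exits at depths[i]
outs = [np.full((4, 2), float(l)) for l in range(3)]
print(per_node_depth_readout(outs, np.array([0, 2, 1, 2])))
```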
arXiv Detail & Related papers (2024-06-18T12:16:00Z)
- Sign is Not a Remedy: Multiset-to-Multiset Message Passing for Learning on Heterophilic Graphs [77.42221150848535]
We propose a novel message-passing function called Multiset-to-Multiset GNN (M2M-GNN).
Our theoretical analyses and extensive experiments demonstrate that M2M-GNN effectively alleviates the aforementioned limitations of signed message passing (SMP), yielding superior performance.
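As a rough illustration of the multiset-to-multiset idea (the pooling rule and names below are our assumptions, not the paper's operator): each node maps its incoming neighbor multiset to a small outgoing multiset of messages instead of collapsing it into one summed vector.

```python
import numpy as np

def m2m_aggregate(features, neighbors, k=2):
    """Map each node's neighbor multiset to a fixed-size output multiset
    (here: the k largest-norm messages) rather than a single vector."""
    dim = features.shape[1]
    out = []
    for nbrs in neighbors:
        msgs = features[list(nbrs)].reshape(-1, dim)       # incoming multiset
        order = np.argsort(-np.linalg.norm(msgs, axis=1))
        kept = msgs[order[:k]]                             # outgoing multiset
        if kept.shape[0] < k:                              # pad small neighborhoods
            kept = np.vstack([kept, np.zeros((k - kept.shape[0], dim))])
        out.append(kept)
    return np.stack(out)                                   # (N, k, dim)

feats = np.random.default_rng(0).normal(size=(5, 3))
print(m2m_aggregate(feats, [[1, 2, 3], [0], [0, 4], [0], [2]]).shape)  # (5, 2, 3)
```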
arXiv Detail & Related papers (2024-05-31T07:39:22Z)
- Heterophilous Distribution Propagation for Graph Neural Networks [23.897535976924722]
We propose heterophilous distribution propagation (HDP) for graph neural networks.
Instead of aggregating information from all neighbors, HDP adaptively separates the neighbors into homophilous and heterophilous parts.
We conduct extensive experiments on 9 benchmark datasets with different levels of homophily.
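A minimal sketch of the separation idea, assuming predicted labels are used to split each neighborhood (the paper's actual separation and propagation mechanisms may differ):

```python
import numpy as np

def split_neighborhood_aggregate(features, neighbors, pred_labels):
    """Keep homophilous and heterophilous neighbor aggregates separate
    instead of mixing them in a single sum."""
    homo = np.zeros_like(features)
    hetero = np.zeros_like(features)
    for v, nbrs in enumerate(neighbors):
        for u in nbrs:
            if pred_labels[u] == pred_labels[v]:
                homo[v] += features[u]       # homophilous part
            else:
                hetero[v] += features[u]     # heterophilous part
    return np.concatenate([homo, hetero], axis=1)

feats = np.eye(4)
print(split_neighborhood_aggregate(feats, [[1, 2], [0], [0, 3], [2]],
                                   pred_labels=np.array([0, 0, 1, 1])).shape)
```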
arXiv Detail & Related papers (2024-05-31T06:40:56Z)
- Generation is better than Modification: Combating High Class Homophily Variance in Graph Anomaly Detection [51.11833609431406]
In graph anomaly detection, homophily distribution differences between classes are significantly greater than those in ordinary homophilic and heterophilic graphs.
We introduce a new metric called Class Homophily Variance, which quantitatively describes this phenomenon.
To mitigate its impact, we propose a novel GNN model named Homophily Edge Generation Graph Neural Network (HedGe).
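The summary does not define the metric, but one plausible formalization (ours, not necessarily the paper's) is the variance of per-class edge-homophily ratios:

```python
import numpy as np

def class_homophily_variance(edges, labels):
    """Variance of per-class edge-homophily ratios: for each class, the
    fraction of edges touching that class whose endpoints share a label."""
    ratios = []
    for c in np.unique(labels):
        touched = [(u, v) for u, v in edges if labels[u] == c or labels[v] == c]
        if touched:
            same = sum(labels[u] == labels[v] for u, v in touched)
            ratios.append(same / len(touched))
    return float(np.var(ratios))

labels = np.array([0, 0, 1, 1, 2])
print(class_homophily_variance([(0, 1), (1, 2), (2, 3), (3, 4)], labels))
```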
arXiv Detail & Related papers (2024-03-15T14:26:53Z)
- Improving Signed Propagation for Graph Neural Networks in Multi-Class Environments [3.4498722449655066]
We introduce two novel strategies for improving signed propagation under multi-class graphs.
The proposed scheme incorporates calibration to secure robustness while reducing uncertainty.
We show the efficacy of our theorem through extensive experiments on six benchmark graph datasets.
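For the calibration component, a standard temperature-scaling sketch is shown below as background; it is one common calibration technique, and the paper's exact calibration scheme is not specified in this summary.

```python
import numpy as np

def temperature_scale(logits, T):
    """Softmax with temperature T > 1 softens over-confident predictions."""
    z = logits / T
    z -= z.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)

logits = np.array([[2.0, 0.5, -1.0]])
print(temperature_scale(logits, T=1.0))  # sharper
print(temperature_scale(logits, T=2.0))  # softer, less over-confident
```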
arXiv Detail & Related papers (2023-01-21T08:47:22Z)
- A Non-Asymptotic Analysis of Oversmoothing in Graph Neural Networks [33.35609077417775]
We characterize the mechanism behind the phenomenon via a non-asymptotic analysis.
We show that oversmoothing happens once the mixing effect starts to dominate the denoising effect.
Our results suggest that while PPR mitigates oversmoothing at deeper layers, PPR-based architectures still achieve their best performance at a shallow depth.
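For context, PPR-based propagation in the APPNP style looks as follows: the teleport term alpha re-injects each node's own features at every step, which is the mechanism that damps over-smoothing at depth (a standard construction, sketched here for illustration rather than taken from this paper).

```python
import numpy as np

def ppr_propagate(adj_norm, features, alpha=0.1, steps=10):
    """APPNP-style personalized-PageRank propagation."""
    h = features.copy()
    for _ in range(steps):
        h = (1 - alpha) * adj_norm @ h + alpha * features  # teleport back to X
    return h

# symmetric-normalized adjacency of a 3-node path graph
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
deg = A.sum(1)
A_norm = A / np.sqrt(np.outer(deg, deg))
print(ppr_propagate(A_norm, np.eye(3)))
```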
arXiv Detail & Related papers (2022-12-21T00:33:59Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- Implicit vs Unfolded Graph Neural Networks [18.084842625063082]
Graph neural networks (GNNs) sometimes struggle to maintain a healthy balance between modeling long-range dependencies and avoiding unintended consequences such as over-smoothed node representations.
Two separate strategies have recently been proposed, namely implicit and unfolded GNNs.
We provide empirical head-to-head comparisons across a variety of synthetic and public real-world benchmarks.
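A toy implicit-GNN layer, iterated to a fixed point, illustrates the "implicit" side of the comparison; real implicit GNNs constrain the weights so the fixed point exists and backpropagate through it implicitly (this sketch is our illustration, not the paper's model).

```python
import numpy as np

def implicit_layer(adj_norm, X, W, gamma=0.5, tol=1e-6, max_iter=100):
    """Iterate Z <- gamma * A Z W + X to a fixed point, emulating
    infinite depth without stacking layers (requires a contraction)."""
    Z = X.copy()
    for _ in range(max_iter):
        Z_new = gamma * adj_norm @ Z @ W + X
        if np.abs(Z_new - Z).max() < tol:
            break
        Z = Z_new
    return Z

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
deg = A.sum(1)
A_norm = A / np.sqrt(np.outer(deg, deg))
print(implicit_layer(A_norm, np.eye(3), W=0.5 * np.eye(3)))
```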
arXiv Detail & Related papers (2021-11-12T07:49:16Z)
- Is Homophily a Necessity for Graph Neural Networks? [50.959340355849896]
Graph neural networks (GNNs) have shown great prowess in learning representations suitable for numerous graph-based machine learning tasks.
GNNs are widely believed to work well due to the homophily assumption ("like attracts like"), and fail to generalize to heterophilous graphs where dissimilar nodes connect.
Recent works design new architectures to overcome such heterophily-related limitations, citing poor baseline performance and new architecture improvements on a few heterophilous graph benchmark datasets as evidence for this notion.
In our experiments, we empirically find that standard graph convolutional networks (GCNs) can actually achieve better performance than such carefully designed methods on some commonly used heterophilous graphs.
arXiv Detail & Related papers (2021-06-11T02:44:00Z)
- Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks [60.22494363676747]
It is known that current graph neural networks (GNNs) are difficult to make deep due to the problem known as over-smoothing.
Multi-scale GNNs are a promising approach for mitigating the over-smoothing problem.
We derive the optimization and generalization guarantees of transductive learning algorithms that include multi-scale GNNs.
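A JKNet-style sketch of the multi-scale idea (our illustration, not the paper's construction): concatenating propagations at several depths preserves shallow, non-over-smoothed signals alongside deeper ones.

```python
import numpy as np

def multiscale_readout(adj_norm, features, depths=(1, 2, 4)):
    """Concatenate feature propagations at several depths so that
    shallow (non-over-smoothed) representations survive."""
    h, scales, step = features.copy(), [], 0
    for d in sorted(depths):
        while step < d:
            h = adj_norm @ h                 # one more propagation hop
            step += 1
        scales.append(h)
    return np.concatenate(scales, axis=1)    # (N, len(depths) * dim)

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
deg = A.sum(1)
A_norm = A / np.sqrt(np.outer(deg, deg))
print(multiscale_readout(A_norm, np.eye(3)).shape)  # (3, 9)
```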
arXiv Detail & Related papers (2020-06-15T17:06:17Z)