Oversmoothing as Loss of Sign: Towards Structural Balance in Graph Neural Networks
- URL: http://arxiv.org/abs/2502.11394v1
- Date: Mon, 17 Feb 2025 03:25:36 GMT
- Authors: Jiaqi Wang, Xinyi Wu, James Cheng, Yifei Wang
- Abstract summary: Oversmoothing is a common issue in graph neural networks (GNNs).
Three major classes of anti-oversmoothing techniques can be mathematically interpreted as message passing over signed graphs.
Negative edges can repel nodes to a certain extent, providing deeper insights into how these methods mitigate oversmoothing.
- Score: 54.62268052283014
- Abstract: Oversmoothing is a common issue in graph neural networks (GNNs), where node representations become excessively homogeneous as the number of layers increases, resulting in degraded performance. Various strategies have been proposed to combat oversmoothing in practice, yet they are based on different heuristics and lack a unified understanding of their inherent mechanisms. In this paper, we show that three major classes of anti-oversmoothing techniques can be mathematically interpreted as message passing over signed graphs comprising both positive and negative edges. By analyzing the asymptotic behavior of signed graph propagation, we demonstrate that negative edges can repel nodes to a certain extent, providing deeper insights into how these methods mitigate oversmoothing. Furthermore, our results suggest that the structural balance of a signed graph, where positive edges exist only within clusters and negative edges appear only between clusters, is crucial for clustering node representations in the long term through signed graph propagation. Motivated by these observations, we propose Structural Balance Propagation (SBP), a solution that mitigates oversmoothing with theoretical guarantees by incorporating label and feature information to create a structurally balanced graph for message passing. Experiments on nine datasets against twelve baselines demonstrate the effectiveness of our method, highlighting the value of our signed graph perspective.
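The abstract's central claim, that propagation over a structurally balanced signed graph clusters node representations instead of collapsing them, can be illustrated with a minimal NumPy sketch. This is not the authors' SBP implementation; the graph, normalization, and features below are illustrative assumptions.

```python
import numpy as np

# Toy structurally balanced signed graph: 4 nodes in two clusters {0,1} and {2,3}.
# Positive edges exist only within clusters, negative edges only between them.
A = np.array([
    [ 0.,  1., -1., -1.],
    [ 1.,  0., -1., -1.],
    [-1., -1.,  0.,  1.],
    [-1., -1.,  1.,  0.],
])

# Normalize by the signed degree |A| row sums (one common choice; the paper's
# exact propagation operator may differ).
D_inv = np.diag(1.0 / np.abs(A).sum(axis=1))
P = D_inv @ A

# Fixed 2-d node features for reproducibility.
X = np.array([[1.0, 0.2], [0.8, -0.1], [0.3, 1.0], [0.5, 0.7]])

for _ in range(50):
    X = P @ X  # signed message passing layer

# With structural balance, repeated propagation aligns nodes within a cluster
# while the two clusters converge to opposite signs rather than one point.
print(np.sign(X[:, 0]))  # same sign for nodes 0,1; opposite sign for nodes 2,3
```

Under an all-positive graph the same iteration would drive every row of `X` to a common value (oversmoothing); the negative inter-cluster edges are what preserve the sign separation.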
Related papers
- Effective and Lightweight Representation Learning for Link Sign Prediction in Signed Bipartite Graphs [3.996726993941017]
We propose ELISE, an effective and lightweight GNN-based approach for learning signed bipartite graphs.
We first extend personalized propagation to a signed bipartite graph, incorporating signed edges during message passing.
We then jointly learn node embeddings on a low-rank approximation of the signed bipartite graph, which reduces potential noise.
arXiv Detail & Related papers (2024-12-25T00:39:38Z) - Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - Implicit vs Unfolded Graph Neural Networks [18.084842625063082]
Graph neural networks (GNNs) sometimes struggle to maintain a healthy balance between modeling long-range dependencies and avoiding unintended consequences.
Two separate strategies have recently been proposed, namely implicit and unfolded GNNs.
We provide empirical head-to-head comparisons across a variety of synthetic and public real-world benchmarks.
arXiv Detail & Related papers (2021-11-12T07:49:16Z) - Unbiased Graph Embedding with Biased Graph Observations [52.82841737832561]
We propose a principled new way for obtaining unbiased representations by learning from an underlying bias-free graph.
Based on this new perspective, we propose two complementary methods for uncovering such an underlying graph.
arXiv Detail & Related papers (2021-10-26T18:44:37Z) - Structure-Aware Hard Negative Mining for Heterogeneous Graph Contrastive Learning [21.702342154458623]
This work investigates Contrastive Learning (CL) on Graph Neural Networks (GNNs).
We first generate multiple semantic views according to metapaths and network schemas.
We then push node embeddings corresponding to different semantic views close to each other (positives) and pull other embeddings apart (negatives).
Considering the complex graph structure and the smoothing nature of GNNs, we propose a structure-aware hard negative mining scheme.
arXiv Detail & Related papers (2021-08-31T14:44:49Z) - Anisotropic Graph Convolutional Network for Semi-supervised Learning [7.843067454030999]
Graph convolutional networks learn effective node embeddings that have proven to be useful in achieving high-accuracy prediction results.
These networks suffer from over-smoothing and a shrinking effect of the graph, due in large part to the fact that they diffuse features across the edges of the graph using a linear Laplacian flow.
We propose an anisotropic graph convolutional network for semi-supervised node classification by introducing a nonlinear function that captures informative features from nodes, while preventing oversmoothing.
arXiv Detail & Related papers (2020-10-20T13:56:03Z) - Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Several recent studies attribute the performance deterioration of deeper models to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
arXiv Detail & Related papers (2020-07-18T01:11:14Z) - Interpretable Deep Graph Generation with Node-Edge Co-Disentanglement [55.2456981313287]
We propose a new disentanglement enhancement framework for deep generative models for attributed graphs.
A novel variational objective is proposed to disentangle the above three types of latent factors, with novel architecture for node and edge deconvolutions.
Within each type, individual-factor-wise disentanglement is further enhanced, which is shown to be a generalization of the existing framework for images.
arXiv Detail & Related papers (2020-06-09T16:33:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.