A Signed Graph Approach to Understanding and Mitigating Oversmoothing in GNNs
- URL: http://arxiv.org/abs/2502.11394v2
- Date: Thu, 29 May 2025 08:12:40 GMT
- Title: A Signed Graph Approach to Understanding and Mitigating Oversmoothing in GNNs
- Authors: Jiaqi Wang, Xinyi Wu, James Cheng, Yifei Wang
- Abstract summary: We present a unified theoretical perspective based on the framework of signed graphs. We show that many existing strategies implicitly introduce negative edges that alter message-passing to resist oversmoothing. We propose Structural Balanced Propagation (SBP), a plug-and-play method that assigns signed edges based on either labels or feature similarity.
- Score: 54.62268052283014
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep graph neural networks (GNNs) often suffer from oversmoothing, where node representations become overly homogeneous with increasing depth. While techniques like normalization, residual connections, and edge dropout have been proposed to mitigate oversmoothing, they are typically developed independently, with limited theoretical understanding of their underlying mechanisms. In this work, we present a unified theoretical perspective based on the framework of signed graphs, showing that many existing strategies implicitly introduce negative edges that alter message-passing to resist oversmoothing. However, we show that merely adding negative edges in an unstructured manner is insufficient: the asymptotic behavior of signed propagation depends critically on the strength and organization of positive and negative edges. To address this limitation, we leverage the theory of structural balance, which promotes stable, cluster-preserving dynamics by connecting similar nodes with positive edges and dissimilar ones with negative edges. We propose Structural Balanced Propagation (SBP), a plug-and-play method that assigns signed edges based on either labels or feature similarity to explicitly enhance structural balance in the constructed signed graphs. Experiments on nine benchmarks across both homophilic and heterophilic settings demonstrate that SBP consistently improves classification accuracy and mitigates oversmoothing, even at depths of up to 300 layers. Our results provide a principled explanation for prior oversmoothing remedies and introduce a new direction for signed message-passing design in deep GNNs.
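The core idea in the abstract, assigning positive edges within clusters and negative edges across them, then propagating over the signed graph, can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the label-based sign rule, and the plain row-normalized propagation are illustrative assumptions about how such a scheme might look.

```python
import numpy as np

def assign_signs(adj, labels):
    """Label-based sign assignment (illustrative): +1 for edges between
    same-label nodes, -1 otherwise. When labels define the clusters,
    the resulting signed graph is structurally balanced."""
    signs = np.zeros_like(adj, dtype=float)
    for i, j in zip(*np.nonzero(adj)):
        signs[i, j] = 1.0 if labels[i] == labels[j] else -1.0
    return signs

def signed_propagate(features, signed_adj, steps=300):
    """Row-normalized signed message passing: positive edges pull a node
    toward its neighbors' features, negative edges push it away."""
    deg = np.abs(signed_adj).sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0          # guard isolated nodes
    prop = signed_adj / deg
    x = features
    for _ in range(steps):
        x = prop @ x
    return x

# Two clusters {0,1} and {2,3}, with within- and cross-cluster edges.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]], dtype=float)
labels = np.array([0, 0, 1, 1])
S = assign_signs(adj, labels)
x = signed_propagate(np.array([[1.0], [1.0], [-1.0], [-1.0]]), S, steps=50)
```

On this toy balanced graph the cluster features remain separated after many propagation steps, whereas unsigned averaging over the same adjacency collapses all features to a common value, which is the oversmoothing behavior the paper targets.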
Related papers
- ReDiSC: A Reparameterized Masked Diffusion Model for Scalable Node Classification with Structured Predictions [64.17845687013434]
We propose ReDiSC, a structured diffusion model for structured node classification. We show that ReDiSC achieves superior or highly competitive performance compared to state-of-the-art GNN, label propagation, and diffusion-based baselines. Notably, ReDiSC scales effectively to large-scale datasets on which previous structured diffusion methods fail due to computational constraints.
arXiv Detail & Related papers (2025-07-19T04:46:53Z) - Mitigating the Structural Bias in Graph Adversarial Defenses [25.511121574854872]
Recent studies have found that graph neural networks (GNNs) are susceptible to malicious adversarial attacks. We propose a defense strategy that includes hetero-homo augmented graph construction, $k$NN augmented graph construction, and multi-view node-wise attention modules. We conduct extensive experiments to demonstrate the defense and debiasing effect of the proposed strategy on benchmark datasets.
arXiv Detail & Related papers (2025-04-29T15:19:05Z) - Effective and Lightweight Representation Learning for Link Sign Prediction in Signed Bipartite Graphs [3.996726993941017]
We propose ELISE, an effective and lightweight GNN-based approach for learning signed bipartite graphs. We first extend personalized propagation to a signed bipartite graph, incorporating signed edges during message passing. We then jointly learn node embeddings on a low-rank approximation of the signed bipartite graph, which reduces potential noise.
arXiv Detail & Related papers (2024-12-25T00:39:38Z) - Better Not to Propagate: Understanding Edge Uncertainty and Over-smoothing in Signed Graph Neural Networks [3.4498722449655066]
We propose a novel method for estimating homophily and edge error ratio, integrated with dynamic selection between blocked and signed propagation during training.
Our theoretical analysis, supported by extensive experiments, demonstrates that blocking MP can be more effective than signed propagation under high edge error ratios.
arXiv Detail & Related papers (2024-08-09T06:46:06Z) - Towards Inductive Robustness: Distilling and Fostering Wave-induced
Resonance in Transductive GCNs Against Graph Adversarial Attacks [56.56052273318443]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks, where slight perturbations in the graph structure can lead to erroneous predictions.
Here, we discover that transductive GCNs inherently possess a distillable robustness, achieved through a wave-induced resonance process.
We present Graph Resonance-fostering Network (GRN) to foster this resonance via learning node representations.
arXiv Detail & Related papers (2023-12-14T04:25:50Z) - Balancing Augmentation with Edge-Utility Filter for Signed GNNs [0.20482269513546458]
Signed graph neural networks (SGNNs) have recently drawn more attention, as many real-world networks are signed networks containing two types of edges: positive and negative.
The existence of negative edges affects SGNN robustness in two aspects. One is semantic imbalance: negative edges are hard to obtain, though they can provide potentially useful information.
In this paper, we propose a balancing augmentation method to address the above two aspects for SGNNs. Firstly, the utility of each negative edge is measured by calculating its occurrence in unbalanced structures. Secondly, the original signed graph is selectively augmented with the use of (1) an edge regulator
arXiv Detail & Related papers (2023-10-25T07:15:01Z) - Efficient Link Prediction via GNN Layers Induced by Negative Sampling [86.87385758192566]
Graph neural networks (GNNs) for link prediction can loosely be divided into two broad categories. We propose a novel GNN architecture whereby the forward pass explicitly depends on both positive (as is typical) and negative (unique to our approach) edges. This is achieved by recasting the embeddings themselves as minimizers of a forward-pass-specific energy function that favors separation of positive and negative samples.
arXiv Detail & Related papers (2023-10-14T07:02:54Z) - Improving Signed Propagation for Graph Neural Networks in Multi-Class Environments [3.4498722449655066]
We introduce two novel strategies for improving signed propagation under multi-class graphs.
The proposed scheme combines calibration to secure robustness while reducing uncertainty.
We show the efficacy of our theorem through extensive experiments on six benchmark graph datasets.
arXiv Detail & Related papers (2023-01-21T08:47:22Z) - Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL)
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous
Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, Graph Injection Attack.
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - Implicit vs Unfolded Graph Neural Networks [18.084842625063082]
Graph neural networks (GNNs) sometimes struggle to maintain a healthy balance between modeling long-range dependencies and avoiding unintended consequences.
Two separate strategies have recently been proposed, namely implicit and unfolded GNNs.
We provide empirical head-to-head comparisons across a variety of synthetic and public real-world benchmarks.
arXiv Detail & Related papers (2021-11-12T07:49:16Z) - Unbiased Graph Embedding with Biased Graph Observations [52.82841737832561]
We propose a principled new way for obtaining unbiased representations by learning from an underlying bias-free graph.
Based on this new perspective, we propose two complementary methods for uncovering such an underlying graph.
arXiv Detail & Related papers (2021-10-26T18:44:37Z) - Structure-Aware Hard Negative Mining for Heterogeneous Graph Contrastive
Learning [21.702342154458623]
This work investigates Contrastive Learning (CL) on Graph Neural Networks (GNNs)
We first generate multiple semantic views according to metapaths and network schemas.
We then push node embeddings corresponding to different semantic views close to each other (positives) and pull other embeddings apart (negatives).
Considering the complex graph structure and the smoothing nature of GNNs, we propose a structure-aware hard negative mining scheme.
arXiv Detail & Related papers (2021-08-31T14:44:49Z) - Adversarial Graph Disentanglement [47.27978741175575]
A real-world graph has a complex topological structure, which is often formed by the interaction of different latent factors.
We propose an Adversarial Disentangled Graph Convolutional Network (ADGCN) for disentangled graph representation learning.
arXiv Detail & Related papers (2021-03-12T14:11:36Z) - Graph Neural Networks Inspired by Classical Iterative Algorithms [28.528150667063876]
We consider a new family of GNN layers designed to mimic and integrate the update rules of two classical iterative algorithms.
A novel attention mechanism is explicitly anchored to an underlying end-to-end energy function, contributing stability with respect to edge uncertainty.
arXiv Detail & Related papers (2021-03-10T14:08:12Z) - Interpretable Signed Link Prediction with Signed Infomax Hyperbolic
Graph [54.03786611989613]
Signed link prediction in social networks aims to reveal the underlying relationships (i.e., links) among users (i.e., nodes).
We develop a unified framework, termed Signed Infomax Hyperbolic Graph (SIHG).
In order to model high-order user relations and complex hierarchies, the node embeddings are projected and measured in a hyperbolic space with a lower distortion.
arXiv Detail & Related papers (2020-11-25T05:09:03Z) - Anisotropic Graph Convolutional Network for Semi-supervised Learning [7.843067454030999]
Graph convolutional networks learn effective node embeddings that have proven to be useful in achieving high-accuracy prediction results.
These networks suffer from over-smoothing and a shrinking effect of the graph, due in large part to the fact that they diffuse features across the edges of the graph using a linear Laplacian flow.
We propose an anisotropic graph convolutional network for semi-supervised node classification by introducing a nonlinear function that captures informative features from nodes, while preventing oversmoothing.
arXiv Detail & Related papers (2020-10-20T13:56:03Z) - Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Several recent studies attribute this performance deterioration to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
arXiv Detail & Related papers (2020-07-18T01:11:14Z) - Interpretable Deep Graph Generation with Node-Edge Co-Disentanglement [55.2456981313287]
We propose a new disentanglement enhancement framework for deep generative models for attributed graphs.
A novel variational objective is proposed to disentangle the above three types of latent factors, with novel architecture for node and edge deconvolutions.
Within each type, individual-factor-wise disentanglement is further enhanced, which is shown to be a generalization of the existing framework for images.
arXiv Detail & Related papers (2020-06-09T16:33:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.