Community detection robustness of graph neural networks
- URL: http://arxiv.org/abs/2509.24662v1
- Date: Mon, 29 Sep 2025 12:08:22 GMT
- Title: Community detection robustness of graph neural networks
- Authors: Jaidev Goel, Pablo Moriano, Ramakrishnan Kannan, Yulia R. Gel
- Abstract summary: Graph neural networks (GNNs) are increasingly used for community detection in attributed networks. We evaluate six widely adopted GNN architectures: GCN, GAT, GraphSAGE, DiffPool, MinCUT, and DMoN. Supervised GNNs tend to achieve higher baseline accuracy, while unsupervised methods, particularly DMoN, maintain stronger resilience.
- Score: 19.213455157528912
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) are increasingly used for community detection in attributed networks. They combine structural topology with node attributes through message passing and pooling. However, their robustness, or lack thereof, with respect to different perturbations and targeted attacks in community detection tasks is not well understood. To shed light on the latent mechanisms behind GNN sensitivity in community detection tasks, we conduct a systematic computational evaluation of six widely adopted GNN architectures: GCN, GAT, GraphSAGE, DiffPool, MinCUT, and DMoN. The analysis covers three perturbation categories: node attribute manipulations, edge topology distortions, and adversarial attacks. We use element-centric similarity as the evaluation metric on synthetic benchmarks and real-world citation networks. Our findings indicate that supervised GNNs tend to achieve higher baseline accuracy, while unsupervised methods, particularly DMoN, maintain stronger resilience under targeted and adversarial perturbations. Furthermore, robustness appears to be strongly influenced by community strength, with well-defined communities reducing performance loss. Across all models, node attribute perturbations combined with targeted edge deletions and shifts in attribute distributions tend to cause the largest degradation in community recovery. These findings highlight important trade-offs between accuracy and robustness in GNN-based community detection and offer new insights into selecting architectures resilient to noise and adversarial attacks.
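The abstract's evaluation metric, element-centric similarity, admits a simple closed form for hard (disjoint) partitions; a minimal pure-Python sketch, assuming the standard hard-partition simplification of the measure (function name and dict-based interface are illustrative, not the paper's code):

```python
from collections import defaultdict

def element_centric_similarity(part_a, part_b):
    """Element-centric similarity between two hard partitions.

    part_a, part_b: dicts mapping element -> cluster label (same keys).
    For disjoint clusterings, the affinity of element i to element j is
    1/|cluster(i)| when j shares i's cluster, else 0; the per-element
    score is S_i = 1 - 0.5 * sum_j |p_ij - q_ij|, averaged over elements.
    """
    clusters_a, clusters_b = defaultdict(set), defaultdict(set)
    for el, c in part_a.items():
        clusters_a[c].add(el)
    for el, c in part_b.items():
        clusters_b[c].add(el)

    total = 0.0
    for el in part_a:
        A = clusters_a[part_a[el]]   # el's cluster in partition A
        B = clusters_b[part_b[el]]   # el's cluster in partition B
        overlap = len(A & B)
        # L1 distance between the two affinity rows for element `el`:
        # shared members differ by |1/|A| - 1/|B||, the rest contribute
        # their full affinity from whichever partition contains them.
        l1 = (overlap * abs(1 / len(A) - 1 / len(B))
              + (len(A) - overlap) / len(A)
              + (len(B) - overlap) / len(B))
        total += 1.0 - 0.5 * l1
    return total / len(part_a)
```

Identical partitions score 1.0, and the score degrades gracefully as community assignments drift apart, which is what makes it suitable for tracking recovery loss under perturbation.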
Related papers
- Pruning Graphs by Adversarial Robustness Evaluation to Strengthen GNN Defenses [1.066048003460524]
Graph Neural Networks (GNNs) have emerged as a dominant paradigm for learning on graph-structured data. In this work, we introduce a pruning framework that leverages adversarial robustness evaluation to explicitly identify and remove detrimental components of the graph. By using robustness scores as guidance, our method selectively prunes edges that are most likely to degrade model reliability, thereby yielding cleaner and more resilient graph representations.
arXiv Detail & Related papers (2025-11-29T20:15:54Z)
- Non-Dissipative Graph Propagation for Non-Local Community Detection [14.99394337842476]
We introduce the Unsupervised Antisymmetric Graph Neural Network (uAGNN), a novel unsupervised community detection approach. We show uAGNN's superior performance in high and medium heterophilic settings, where traditional methods fail to exploit long-range dependencies. These results highlight uAGNN's potential as a powerful tool for unsupervised community detection in diverse graph environments.
arXiv Detail & Related papers (2025-08-15T12:26:48Z)
- Grimm: A Plug-and-Play Perturbation Rectifier for Graph Neural Networks Defending against Poisoning Attacks [53.972077392749185]
Recent studies have revealed the vulnerability of graph neural networks (GNNs) to adversarial poisoning attacks on node classification tasks. Here we introduce Grimm, the first plug-and-play defense model.
arXiv Detail & Related papers (2024-12-11T17:17:02Z)
- RW-NSGCN: A Robust Approach to Structural Attacks via Negative Sampling [10.124585385676376]
Graph Neural Networks (GNNs) have been widely applied in various practical scenarios, such as predicting user interests and detecting communities in social networks.
Recent studies have shown that graph-structured networks often contain potential noise and attacks, in the form of topological perturbations and weight disturbances.
We propose a novel method: the Random Walk Negative Sampling Graph Convolutional Network (RW-NSGCN). Specifically, RW-NSGCN integrates the Random Walk with Restart (RWR) and PageRank algorithms for negative sampling and employs a Determinantal Point Process (DPP)-based GCN for convolution operations.
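The Random Walk with Restart scores underlying RW-NSGCN's negative sampling can be computed by power iteration; a minimal sketch on an adjacency-dict graph (the function and parameters are illustrative, not the paper's implementation):

```python
def random_walk_with_restart(adj, seed, restart=0.15, iters=100):
    """Power iteration for Random Walk with Restart (RWR) scores.

    adj: dict node -> list of neighbours (connected, undirected graph).
    Returns dict node -> stationary visiting probability of a walker
    that restarts at `seed` with probability `restart` each step.
    Nodes with low RWR scores relative to a seed are natural candidates
    for negative samples.
    """
    nodes = list(adj)
    p = {v: 0.0 for v in nodes}
    p[seed] = 1.0
    for _ in range(iters):
        nxt = {v: 0.0 for v in nodes}
        for v in nodes:
            # Spread the non-restart mass uniformly over neighbours.
            share = (1 - restart) * p[v] / len(adj[v])
            for u in adj[v]:
                nxt[u] += share
        nxt[seed] += restart  # restart mass always returns to the seed
        p = nxt
    return p
```

On a connected graph the scores stay normalized (they sum to 1) and decay with distance from the seed, which is the property negative-sampling schemes exploit.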
arXiv Detail & Related papers (2024-08-13T06:34:56Z)
- Advanced Financial Fraud Detection Using GNN-CL Model [13.5240775562349]
The innovative GNN-CL model proposed in this paper marks a breakthrough in the field of financial fraud detection.
It combines the advantages of graph neural networks (GNNs), convolutional neural networks (CNNs), and long short-term memory (LSTM) networks.
A key novelty of this paper is the use of multilayer perceptrons (MLPs) to estimate node similarity.
arXiv Detail & Related papers (2024-07-09T03:59:06Z)
- HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z)
- Graph Agent Network: Empowering Nodes with Inference Capabilities for Adversarial Resilience [50.460555688927826]
We propose the Graph Agent Network (GAgN) to address the vulnerabilities of graph neural networks (GNNs). GAgN is a graph-structured agent network in which each node is designed as a 1-hop-view agent. Agents' limited view prevents malicious messages from propagating globally in GAgN, thereby resisting global-optimization-based secondary attacks.
arXiv Detail & Related papers (2023-06-12T07:27:31Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over the state of the art and can serve as a simple yet strong baseline in this under-developed area.
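Energy-based OOD scores of this kind are typically the negative log-sum-exp of a model's logits; a minimal sketch of that scoring rule (illustrative of the general technique, not GNNSafe's exact formulation):

```python
import math

def energy_score(logits):
    """Energy score for OOD detection: E(x) = -log(sum_k exp(logit_k)).

    Confident, in-distribution predictions (one large logit) yield low
    energy; flat, uncertain logits yield high energy, so thresholding
    the score flags likely out-of-distribution inputs.
    """
    m = max(logits)  # subtract the max for numerical stability
    return -(m + math.log(sum(math.exp(l - m) for l in logits)))
```

In a graph setting the same score is simply applied per node to the GNN's classification logits before the softmax.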
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- A Systematic Evaluation of Node Embedding Robustness [77.29026280120277]
We assess the empirical robustness of node embedding models to random and adversarial poisoning attacks.
We compare edge addition, deletion and rewiring strategies computed using network properties as well as node labels.
We found that node classification suffers from higher performance degradation as opposed to network reconstruction.
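The edge addition, deletion, and rewiring strategies compared above can be sketched as a single perturbation helper (a hypothetical illustration operating on a plain edge set, not the paper's exact procedure):

```python
import random

def perturb_edges(edges, n_nodes, mode="rewire", frac=0.1, seed=0):
    """Apply a simple structural perturbation to an undirected edge set.

    mode='delete' removes a fraction of edges, 'add' inserts random
    non-edges, and 'rewire' re-attaches one endpoint of sampled edges
    to a uniformly random node. Returns a new set of sorted edge tuples.
    """
    rng = random.Random(seed)
    edges = {tuple(sorted(e)) for e in edges}
    k = max(1, int(frac * len(edges)))
    if mode == "delete":
        for e in rng.sample(sorted(edges), k):
            edges.discard(e)
    elif mode == "add":
        while k > 0:
            u, v = rng.randrange(n_nodes), rng.randrange(n_nodes)
            e = tuple(sorted((u, v)))
            if u != v and e not in edges:
                edges.add(e)
                k -= 1
    elif mode == "rewire":
        for u, v in rng.sample(sorted(edges), k):
            edges.discard((u, v))
            w = rng.randrange(n_nodes)
            if w != u:  # keep the edge anchored at u, move the other end
                edges.add(tuple(sorted((u, w))))
    return edges
```

Running a community detector on the original and perturbed edge sets, then comparing the two partitions with a clustering similarity measure, gives the robustness curve these studies report.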
arXiv Detail & Related papers (2022-09-16T17:20:23Z)
- Detecting Topology Attacks against Graph Neural Networks [39.968619861265395]
We study the victim node detection problem under topology attacks against GNNs.
Our approach is built upon the key observation rooted in the intrinsic message passing nature of GNNs.
arXiv Detail & Related papers (2022-04-21T13:08:25Z)
- CAP: Co-Adversarial Perturbation on Weights and Features for Improving Generalization of Graph Neural Networks [59.692017490560275]
Adversarial training has been widely demonstrated to improve a model's robustness against adversarial attacks.
It remains unclear how adversarial training could improve the generalization abilities of GNNs in graph analytics problems.
We construct the co-adversarial perturbation (CAP) optimization problem in terms of weights and features, and design the alternating adversarial perturbation algorithm to flatten the weight and feature loss landscapes alternately.
arXiv Detail & Related papers (2021-10-28T02:28:13Z)
- Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks [43.60973654460398]
Graph Neural Networks (GNNs) are generalizations of neural networks to graph-structured data.
GNNs are vulnerable to adversarial attacks, i.e., a small perturbation to the structure can lead to a non-trivial performance degradation.
We propose Uncertainty Matching GNN (UM-GNN), that is aimed at improving the robustness of GNN models.
arXiv Detail & Related papers (2020-09-30T05:29:42Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.