Evading Overlapping Community Detection via Proxy Node Injection
- URL: http://arxiv.org/abs/2509.21211v1
- Date: Thu, 25 Sep 2025 14:21:16 GMT
- Title: Evading Overlapping Community Detection via Proxy Node Injection
- Authors: Dario Loi, Matteo Silvestri, Fabrizio Silvestri, Gabriele Tolomei,
- Abstract summary: We address the problem of community membership hiding (CMH), which seeks edge modifications that cause a target node to exit its original community. We propose a deep reinforcement learning approach that learns effective modification policies, including the use of proxy nodes, while preserving graph structure.
- Score: 11.711770540233337
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Protecting privacy in social graphs requires preventing sensitive information, such as community affiliations, from being inferred by graph analysis, without substantially altering the graph topology. We address this through the problem of \emph{community membership hiding} (CMH), which seeks edge modifications that cause a target node to exit its original community, regardless of the detection algorithm employed. Prior work has focused on non-overlapping community detection, where trivial strategies often suffice, but real-world graphs are better modeled by overlapping communities, where such strategies fail. To the best of our knowledge, we are the first to formalize and address CMH in this setting. In this work, we propose a deep reinforcement learning (DRL) approach that learns effective modification policies, including the use of proxy nodes, while preserving graph structure. Experiments on real-world datasets show that our method significantly outperforms existing baselines in both effectiveness and efficiency, offering a principled tool for privacy-preserving graph modification with overlapping communities.
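The abstract describes the CMH objective and a DRL policy that can route modifications through proxy nodes. As a rough illustration only, and not the paper's method, the hypothetical sketch below uses networkx's k-clique percolation as a stand-in overlapping detector and a single greedy proxy-rewiring rule; the function names, the `budget` parameter, and the success criterion are assumptions made for the example.

```python
# Hypothetical greedy sketch of community membership hiding (CMH) with a proxy
# node. This is NOT the paper's DRL policy; it only illustrates the objective:
# modify edges so a target node exits its original overlapping community,
# using k-clique percolation as a stand-in overlapping detector.
import networkx as nx
from networkx.algorithms.community import k_clique_communities

def communities_of(G, node, k=3):
    """Overlapping communities (k-clique percolation) that contain `node`."""
    return [frozenset(c) for c in k_clique_communities(G, k) if node in c]

def hide_via_proxy(G, target, budget=3, k=3):
    """Reroute up to `budget` of the target's intra-community edges through a proxy."""
    G = G.copy()
    before = communities_of(G, target, k)
    members = set().union(*before) - {target} if before else set()
    proxy = max(G.nodes) + 1  # assumes integer node labels (illustrative choice)
    G.add_node(proxy)
    for u in [v for v in G.neighbors(target) if v in members][:budget]:
        G.remove_edge(target, u)   # detach the target from a community neighbour
        G.add_edge(proxy, u)       # reattach that neighbour to the proxy instead
    after = communities_of(G, target, k)
    # "Hidden" here: none of the target's detected communities matches an original one.
    hidden = all(c not in before for c in after)
    return G, hidden

if __name__ == "__main__":
    G_mod, hidden = hide_via_proxy(nx.karate_club_graph(), target=0, budget=4)
    print("target hidden from its original community:", hidden)
```

A learned policy, as proposed in the paper, would replace the fixed greedy rule with actions chosen by a DRL agent under a budget and structure-preservation constraint.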
Related papers
- Non-Dissipative Graph Propagation for Non-Local Community Detection [14.99394337842476]
We introduce the Unsupervised Antisymmetric Graph Neural Network (uAGNN), a novel unsupervised community detection approach. We show uAGNN's superior performance in high and medium heterophilic settings, where traditional methods fail to exploit long-range dependencies. These results highlight uAGNN's potential as a powerful tool for unsupervised community detection in diverse graph environments.
arXiv Detail & Related papers (2025-08-15T12:26:48Z) - Cluster-Aware Attacks on Graph Watermarks [50.19105800063768]
We introduce a cluster-aware threat model in which adversaries apply community-guided modifications to evade detection. Our results show that cluster-aware attacks can reduce attribution accuracy by up to 80% more than random baselines. We propose a lightweight embedding enhancement that distributes watermark nodes across graph communities.
arXiv Detail & Related papers (2025-04-24T22:49:28Z) - GraphCloak: Safeguarding Task-specific Knowledge within Graph-structured Data from Unauthorized Exploitation [61.80017550099027]
Graph Neural Networks (GNNs) are increasingly prevalent in a variety of fields.
Growing concerns have emerged regarding the unauthorized utilization of personal data.
Recent studies have shown that imperceptible poisoning attacks are an effective method of protecting image data from such misuse.
This paper introduces GraphCloak to safeguard against the unauthorized usage of graph data.
arXiv Detail & Related papers (2023-10-11T00:50:55Z) - BOURNE: Bootstrapped Self-supervised Learning Framework for Unified Graph Anomaly Detection [50.26074811655596]
We propose a novel unified graph anomaly detection framework based on bootstrapped self-supervised learning (named BOURNE).
By swapping the context embeddings between nodes and edges, we enable the mutual detection of node and edge anomalies.
BOURNE can eliminate the need for negative sampling, thereby enhancing its efficiency in handling large graphs.
arXiv Detail & Related papers (2023-07-28T00:44:57Z) - Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z) - GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract the private training graph data by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
arXiv Detail & Related papers (2021-06-05T07:07:52Z) - Unsupervised Constrained Community Detection via Self-Expressive Graph Neural Network [17.209458751421018]
Graph neural networks (GNNs) are able to achieve promising performance on multiple graph downstream tasks such as node classification and link prediction.
Traditionally, GNNs are trained on a semi-supervised or self-supervised loss function and then clustering algorithms are applied to detect communities.
Our solution is trained in an end-to-end fashion and achieves state-of-the-art community detection performance on multiple publicly available datasets.
arXiv Detail & Related papers (2020-11-28T07:17:30Z) - Adversarial Attack on Community Detection by Hiding Individuals [68.76889102470203]
We focus on black-box attacks and aim to hide targeted individuals from deep graph community detection models.
We propose an iterative learning framework that alternately updates two modules: a constrained graph generator and a surrogate community detection model (a generic sketch of such an alternating loop follows at the end of this list).
arXiv Detail & Related papers (2020-01-22T09:50:04Z)
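The last entry above alternates a constrained graph generator with a surrogate community detection model. The sketch below is a heavily simplified, hypothetical analogue rather than that paper's framework: the "generator" is a greedy edge-flip rule, the "surrogate detector" is networkx's greedy modularity communities re-run each round, and the `budget`, `rounds`, and success criterion are assumptions for illustration.

```python
# Hypothetical alternating attack loop: a constrained "generator" proposes edge
# flips for the target, and a surrogate community detector is re-run on the
# perturbed graph each round. Illustrative only; not the cited paper's method.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def surrogate_detect(G):
    """Stand-in surrogate detector: node sets from modularity-based detection."""
    return [set(c) for c in greedy_modularity_communities(G)]

def community_of(comms, node):
    return next(c for c in comms if node in c)

def attack(G, target, budget=4, rounds=3):
    G = G.copy()
    original = community_of(surrogate_detect(G), target)
    for _ in range(rounds):
        comms = surrogate_detect(G)            # refresh the surrogate on the current graph
        current = community_of(comms, target)
        intra = [u for u in G.neighbors(target) if u in current]
        outside = [u for u in G.nodes
                   if u != target and u not in current and not G.has_edge(target, u)]
        if budget < 2 or not intra or not outside:
            break
        # Generator step: drop one intra-community edge, add one edge outside.
        G.remove_edge(target, intra[0])
        G.add_edge(target, outside[0])
        budget -= 2
    hidden = len(community_of(surrogate_detect(G), target) & original) <= 1
    return G, hidden

if __name__ == "__main__":
    G_mod, hidden = attack(nx.karate_club_graph(), target=0)
    print("target hidden from its original community:", hidden)
```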