Devil's Hand: Data Poisoning Attacks to Locally Private Graph Learning Protocols
- URL: http://arxiv.org/abs/2506.09803v2
- Date: Thu, 26 Jun 2025 14:18:21 GMT
- Title: Devil's Hand: Data Poisoning Attacks to Locally Private Graph Learning Protocols
- Authors: Longzhu He, Chaozhuo Li, Peng Tang, Li Sun, Sen Su, Philip S. Yu
- Abstract summary: This work introduces the first data poisoning attack targeting locally private graph learning protocols. The attacker injects fake users into the protocol, manipulates these fake users to establish links with genuine users, and sends carefully crafted data to the server. The effectiveness of the attack is demonstrated both theoretically and empirically.
- Score: 46.94619400437805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) have achieved significant success in graph representation learning and have been applied to various domains. However, many real-world graphs contain sensitive personal information, such as user profiles in social networks, raising serious privacy concerns when graph learning is performed using GNNs. To address this issue, locally private graph learning protocols have gained considerable attention. These protocols leverage the privacy advantages of local differential privacy (LDP) and the effectiveness of GNN's message-passing in calibrating noisy data, offering strict privacy guarantees for users' local data while maintaining high utility (e.g., node classification accuracy) for graph learning. Despite these advantages, such protocols may be vulnerable to data poisoning attacks, a threat that has not been considered in previous research. Identifying and addressing these threats is crucial for ensuring the robustness and security of privacy-preserving graph learning frameworks. This work introduces the first data poisoning attack targeting locally private graph learning protocols. The attacker injects fake users into the protocol, manipulates these fake users to establish links with genuine users, and sends carefully crafted data to the server, ultimately compromising the utility of private graph learning. The effectiveness of the attack is demonstrated both theoretically and empirically. In addition, several defense strategies have also been explored, but their limited effectiveness highlights the need for more robust defenses.
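To make the attack surface concrete, the following is a minimal, hypothetical Python/numpy sketch of the intuition described in the abstract: honest users report LDP-perturbed features, while injected fake users submit extreme crafted values that bias the server-side neighborhood aggregate. The Laplace mechanism, the budget handling, and the mean aggregation are simplifying assumptions for illustration, not the paper's actual protocol.

```python
# Hedged sketch (hypothetical, not the paper's algorithm): illustrates how
# injected fake users can bias an LDP-based graph learning protocol. Honest
# users clip their features and add Laplace noise; fake users send extreme
# crafted reports that look like plausibly noisy values to the server.
import numpy as np

rng = np.random.default_rng(0)
d, n_genuine, n_fake, eps = 8, 100, 20, 1.0     # toy sizes and privacy budget

def ldp_perturb(x, eps, lo=-1.0, hi=1.0):
    """Honest user: clip to [lo, hi] and add Laplace noise (sensitivity hi - lo).
    The per-coordinate budget handling is simplified for illustration."""
    x = np.clip(x, lo, hi)
    return x + rng.laplace(0.0, (hi - lo) / eps, size=x.shape)

def crafted_report(d, value=-5.0):
    """Fake user: the Laplace mechanism's output is unbounded, so an extreme
    report is hard to distinguish from an unusually noisy honest one."""
    return np.full(d, value)

# Genuine users hold mildly positive features; the attacker pushes them negative.
genuine = rng.normal(0.3, 0.1, size=(n_genuine, d))
honest_reports = np.array([ldp_perturb(x, eps) for x in genuine])
fake_reports = np.array([crafted_report(d) for _ in range(n_fake)])

# Server-side aggregation over a (here fully connected) neighborhood:
clean_estimate = honest_reports.mean(axis=0)
poisoned_estimate = np.vstack([honest_reports, fake_reports]).mean(axis=0)
print("clean neighborhood mean   :", np.round(clean_estimate, 2))
print("poisoned neighborhood mean:", np.round(poisoned_estimate, 2))
```

Even this toy example shows the core difficulty: because LDP reports are noisy by design, the server cannot easily tell a crafted extreme report from an honest but unlucky one.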
Related papers
- Learning-based Privacy-Preserving Graph Publishing Against Sensitive Link Inference Attacks [14.766917415961348]
We propose the first privacy-preserving graph structure learning framework against sensitive link inference attacks. The framework, named PPGSL, can automatically learn a graph with the optimal privacy--utility trade-off. PPGSL achieves state-of-the-art privacy--utility trade-off performance and effectively thwarts various sensitive link inference attacks.
arXiv Detail & Related papers (2025-07-23T04:19:29Z) - DP-GPL: Differentially Private Graph Prompt Learning [8.885929731174492]
We propose DP-GPL for differentially private graph prompt learning based on the PATE framework. We show that our algorithm achieves high utility at strong privacy, effectively mitigating privacy concerns.
arXiv Detail & Related papers (2025-03-13T16:58:07Z) - GraphGuard: Detecting and Counteracting Training Data Misuse in Graph Neural Networks [69.97213941893351]
The emergence of Graph Neural Networks (GNNs) in graph data analysis has raised critical concerns about data misuse during model training.
Existing methodologies address either data misuse detection or mitigation, and are primarily designed for local GNN models.
This paper introduces a pioneering approach, GraphGuard, to tackle these challenges.
arXiv Detail & Related papers (2023-12-13T02:59:37Z) - A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications [76.88662943995641]
Graph Neural Networks (GNNs) have gained significant attention owing to their ability to handle graph-structured data.
To address the associated privacy concerns, researchers have started to develop privacy-preserving GNNs.
Despite this progress, there is a lack of a comprehensive overview of the attacks and the techniques for preserving privacy in the graph domain.
arXiv Detail & Related papers (2023-08-31T00:31:08Z) - Independent Distribution Regularization for Private Graph Embedding [55.24441467292359]
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach, Private Variational Graph AutoEncoders (PVGAE), which uses an independent distribution penalty as a regularization term.
arXiv Detail & Related papers (2023-08-16T13:32:43Z) - Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To alleviate attacks on graph properties, obfuscated features that contain information from both vectors are communicated.
arXiv Detail & Related papers (2022-10-21T04:38:25Z) - Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z) - NetFense: Adversarial Defenses against Privacy Attacks on Neural Networks for Graph Data [10.609715843964263]
We propose a novel research task, adversarial defenses against GNN-based privacy attacks.
We present a graph perturbation-based approach, NetFense, to achieve the goal.
arXiv Detail & Related papers (2021-06-22T15:32:50Z) - Adversarial Privacy Preserving Graph Embedding against Inference Attack [9.90348608491218]
Graph embedding has proven extremely useful for learning low-dimensional feature representations from graph-structured data.
Existing graph embedding methods do not consider users' privacy to prevent inference attacks.
We propose Adversarial Privacy Graph Embedding (APGE), a graph adversarial training framework that integrates the disentangling and purging mechanisms to remove users' private information from learned node representations.
arXiv Detail & Related papers (2020-08-30T00:06:49Z) - Locally Private Graph Neural Networks [12.473486843211573]
We study the problem of node data privacy, where graph nodes have potentially sensitive data that is kept private.
We develop a privacy-preserving, architecture-agnostic GNN learning algorithm with formal privacy guarantees.
Experiments conducted over real-world datasets demonstrate that our method can maintain a satisfying level of accuracy with low privacy loss.
arXiv Detail & Related papers (2020-06-09T22:36:06Z)
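As a rough illustration of the mechanism the locally private protocols above share (LDP-perturbed node features whose noise is averaged out by GNN-style message passing), the following hypothetical numpy sketch perturbs a scalar feature with the Laplace mechanism and applies two rounds of neighborhood averaging on a random graph. The specific mechanism, graph model, and propagation scheme are assumptions for illustration, not taken from any of the papers listed above.

```python
# Hedged illustration (not any specific paper's algorithm): why GNN-style
# message passing can "calibrate" LDP-noised node features. Each node reports
# a Laplace-perturbed value; repeated neighborhood averaging shrinks the
# per-node noise, at the cost of smoothing the underlying signal.
import numpy as np

rng = np.random.default_rng(1)
n, eps = 500, 1.0
true_x = rng.uniform(-1, 1, size=n)                 # private scalar feature in [-1, 1]
reports = true_x + rng.laplace(0.0, 2.0 / eps, n)   # LDP report, sensitivity 2

# Random sparse graph as a row-normalized adjacency with self-loops.
A = (rng.random((n, n)) < 0.02).astype(float)
A = np.maximum(A, A.T) + np.eye(n)
P = A / A.sum(axis=1, keepdims=True)

denoised = reports.copy()
for _ in range(2):                                  # two propagation steps
    denoised = P @ denoised

print("MAE of raw reports      :", round(np.abs(reports - true_x).mean(), 3))
print("MAE after propagation   :", round(np.abs(denoised - true_x).mean(), 3))
```

The averaging that makes these protocols useful is also what the poisoning attack exploits: any crafted report from a fake neighbor is folded into the same neighborhood aggregate.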