Node Injection Link Stealing Attack
- URL: http://arxiv.org/abs/2307.13548v1
- Date: Tue, 25 Jul 2023 14:51:01 GMT
- Title: Node Injection Link Stealing Attack
- Authors: Oualid Zari, Javier Parra-Arnau, Ayşe Ünsal, Melek Önen
- Abstract summary: We present a stealthy and effective attack that exposes privacy vulnerabilities in Graph Neural Networks (GNNs) by inferring private links within graph-structured data.
Our work highlights the privacy vulnerabilities inherent in GNNs, underscoring the importance of developing robust privacy-preserving mechanisms for their application.
- Score: 0.649970685896541
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we present a stealthy and effective attack that exposes
privacy vulnerabilities in Graph Neural Networks (GNNs) by inferring private
links within graph-structured data. Focusing on the inductive setting where new
nodes join the graph and an API is used to query predictions, we investigate
the potential leakage of private edge information. We also propose methods to
preserve privacy while maintaining model utility. Our attack demonstrates
superior performance in inferring the links compared to the state of the art.
Furthermore, we examine the application of differential privacy (DP) mechanisms
to mitigate the impact of our proposed attack and analyze the trade-off between
privacy preservation and model utility. Our work highlights the privacy
vulnerabilities inherent in GNNs, underscoring the importance of developing
robust privacy-preserving mechanisms for their application.
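For intuition only, the following minimal sketch illustrates the general shape of a node-injection link-stealing attack in the inductive setting described above, together with a toy Laplace-noise defense on the returned posteriors. The two-layer mean-aggregation model standing in for the queried API, the injected feature values, the L1 posterior-shift signal, the decision threshold, and the noise scale are all illustrative assumptions, not the construction evaluated in the paper.

```python
# Illustrative sketch of a node-injection link-stealing attack on a toy
# message-passing model, plus a toy DP-style defense on returned posteriors.
# Everything here is assumed for exposition, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)


def toy_gnn_posteriors(adj, feats, w1, w2):
    """Two rounds of mean-neighbour aggregation with a softmax head;
    stands in for the victim's inductive prediction API."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    h = np.maximum(((adj @ feats) / deg) @ w1, 0.0)       # layer 1 + ReLU
    logits = ((adj @ h) / deg) @ w2                       # layer 2
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)


def link_stealing_test(adj, feats, w1, w2, u, v, threshold=0.05):
    """Inject a node attached only to u and measure how much v's posterior
    moves; a large shift is read as evidence of a private edge (u, v),
    since message passing only propagates the perturbation from u to
    nodes it is actually connected to."""
    baseline = toy_gnn_posteriors(adj, feats, w1, w2)[v]

    n = adj.shape[0]
    adj2 = np.zeros((n + 1, n + 1), dtype=int)
    adj2[:n, :n] = adj
    adj2[n, u] = adj2[u, n] = 1                           # injected node linked only to u
    feats2 = np.vstack([feats, 10.0 * np.ones(feats.shape[1])])

    perturbed = toy_gnn_posteriors(adj2, feats2, w1, w2)[v]
    shift = float(np.abs(perturbed - baseline).sum())     # L1 distance between posteriors
    return shift > threshold, round(shift, 4)


def noisy_posterior(posterior, epsilon=1.0):
    """Toy DP-style defense: Laplace-perturb the posteriors the API returns,
    trading utility against the posterior-shift signal used above."""
    noisy = np.clip(posterior + rng.laplace(scale=1.0 / epsilon, size=posterior.shape), 0.0, None)
    return noisy / noisy.sum()


# Tiny graph: edge (0, 1) exists, edge (0, 2) does not (node 2 is isolated).
adj = np.array([[0, 1, 0],
                [1, 0, 0],
                [0, 0, 0]])
feats = rng.normal(size=(3, 4))
w1 = 0.3 * rng.normal(size=(4, 8))
w2 = 0.3 * rng.normal(size=(8, 2))

print(link_stealing_test(adj, feats, w1, w2, u=0, v=1))   # perturbation reaches node 1 via the true edge
print(link_stealing_test(adj, feats, w1, w2, u=0, v=2))   # node 2 is unreachable: zero shift
print(noisy_posterior(toy_gnn_posteriors(adj, feats, w1, w2)[1], epsilon=0.5))  # defended response
```

The isolated node in the toy graph shows why this signal is edge-specific: an injection attached to u can only move the posteriors of nodes that message passing actually connects to u.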
Related papers
- Unveiling Privacy Vulnerabilities: Investigating the Role of Structure in Graph Data [17.11821761700748]
This study advances the understanding and protection against privacy risks emanating from network structure.
We develop a novel graph private attribute inference attack, which acts as a pivotal tool for evaluating the potential for privacy leakage through network structures.
Our attack model poses a significant threat to user privacy, and our graph data publishing method successfully achieves the optimal privacy-utility trade-off.
arXiv Detail & Related papers (2024-07-26T07:40:54Z)
- GNNBleed: Inference Attacks to Unveil Private Edges in Graphs with Realistic Access to GNN Models [3.0509197593879844]
This paper investigates edge privacy in contexts where adversaries possess black-box GNN model access.
We introduce a series of privacy attacks grounded on the message-passing mechanism of GNNs.
arXiv Detail & Related papers (2023-11-03T20:26:03Z)
- Blink: Link Local Differential Privacy in Graph Neural Networks via Bayesian Estimation [79.64626707978418]
We propose using link local differential privacy over decentralized nodes to train graph neural networks.
Our approach spends the privacy budget separately on links and degrees of the graph for the server to better denoise the graph topology.
Our approach outperforms existing methods in terms of accuracy under varying privacy budgets.
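(For context, a generic edge-LDP randomized-response sketch appears after this list.)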
arXiv Detail & Related papers (2023-09-06T17:53:31Z)
- A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications [76.88662943995641]
Graph Neural Networks (GNNs) have gained significant attention owing to their ability to handle graph-structured data.
To address the privacy risks that arise when GNNs are trained on sensitive graph data, researchers have started to develop privacy-preserving GNNs.
Despite this progress, there is a lack of a comprehensive overview of the attacks and the techniques for preserving privacy in the graph domain.
arXiv Detail & Related papers (2023-08-31T00:31:08Z)
- Unveiling the Role of Message Passing in Dual-Privacy Preservation on GNNs [7.626349365968476]
Graph Neural Networks (GNNs) are powerful tools for learning representations on graphs, such as social networks.
Privacy-preserving GNNs have been proposed, focusing on preserving node and/or link privacy.
We propose a principled privacy-preserving GNN framework that effectively safeguards both node and link privacy.
arXiv Detail & Related papers (2023-08-25T17:46:43Z)
- Independent Distribution Regularization for Private Graph Embedding [55.24441467292359]
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE), which uses an independent distribution penalty as a regularization term.
arXiv Detail & Related papers (2023-08-16T13:32:43Z)
- Unraveling Privacy Risks of Individual Fairness in Graph Neural Networks [66.0143583366533]
Graph neural networks (GNNs) have gained significant attention due to their expansive real-world applications.
To build trustworthy GNNs, two aspects - fairness and privacy - have emerged as critical considerations.
Previous studies have separately examined the fairness and privacy aspects of GNNs, revealing their trade-off with GNN performance.
Yet, the interplay between these two aspects remains unexplored.
arXiv Detail & Related papers (2023-01-30T14:52:23Z)
- Releasing Graph Neural Networks with Differential Privacy Guarantees [0.81308403220442]
We propose PrivGNN, a privacy-preserving framework for releasing GNN models in a centralized setting.
PrivGNN combines the knowledge-distillation framework with two noise mechanisms, random subsampling and noisy labeling, to ensure rigorous privacy guarantees.
arXiv Detail & Related papers (2021-09-18T11:35:19Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z)
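For context on the link local differential privacy idea mentioned in the Blink entry above, here is a generic sketch of edge-LDP via randomized response on adjacency bits. This is the textbook mechanism rather than Blink's Bayesian estimation, and it omits the separate budget spent on degrees; the epsilon value and the debiasing step are illustrative assumptions.

```python
# Generic edge-LDP sketch: each node randomizes its own adjacency bits
# before sharing them, so no single reported edge can be confidently
# attributed to the true graph. Shown only to make the privacy-budget
# idea concrete; this is not Blink's estimator.
import numpy as np


def randomized_response_row(adj_row, epsilon, rng):
    """Flip each adjacency bit independently with probability
    1 / (1 + e^epsilon); this satisfies epsilon-edge-LDP for the node."""
    flip_prob = 1.0 / (1.0 + np.exp(epsilon))
    flips = rng.random(adj_row.shape) < flip_prob
    return np.where(flips, 1 - adj_row, adj_row)


def debias(noisy_row, epsilon):
    """Undo the flipping bias so edge indicators are correct in expectation:
    E[debias(y)] equals the true bit."""
    keep_prob = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    return (noisy_row - (1.0 - keep_prob)) / (2.0 * keep_prob - 1.0)


rng = np.random.default_rng(0)
adj_row = np.array([0, 1, 0, 0, 1, 1, 0, 0])    # one node's private adjacency bits
noisy = randomized_response_row(adj_row, epsilon=1.0, rng=rng)
print(noisy)
print(debias(noisy, epsilon=1.0).round(2))      # unbiased but noisy edge estimates
```

A server aggregating many such rows can recover structure statistics in expectation (Blink does this more carefully via Bayesian estimation to denoise the topology), while any individual reported edge remains deniable.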
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.