A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications
- URL: http://arxiv.org/abs/2308.16375v3
- Date: Tue, 19 Sep 2023 15:00:52 GMT
- Title: A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications
- Authors: Yi Zhang, Yuying Zhao, Zhaoqing Li, Xueqi Cheng, Yu Wang, Olivera
Kotevska, Philip S. Yu, Tyler Derr
- Abstract summary: Graph Neural Networks (GNNs) have gained significant attention owing to their ability to handle graph-structured data, but many models give little consideration to privacy.
To address this issue, researchers have started to develop privacy-preserving GNNs.
Despite this progress, there is a lack of a comprehensive overview of the attacks and the techniques for preserving privacy in the graph domain.
- Score: 76.88662943995641
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have gained significant attention owing to their
ability to handle graph-structured data and their success in practical
applications. However, many of these models prioritize high utility
performance, such as accuracy, while giving little consideration to privacy, a
major concern in modern society where privacy attacks are rampant. To address
this issue, researchers have started to develop privacy-preserving GNNs.
Despite this progress, there is a lack of a comprehensive overview of the
attacks and the techniques for preserving privacy in the graph domain. In this
survey, we aim to address this gap by summarizing the attacks on graph data
according to the targeted information, categorizing the privacy preservation
techniques in GNNs, and reviewing the datasets and applications that could be
used for analyzing/solving privacy issues in GNNs. We also outline potential
directions for future research in order to build better privacy-preserving
GNNs.
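For readers new to the area, the following is a minimal, hypothetical sketch of one attack family such surveys cover: membership inference, where an adversary with query access guesses whether a node was in the training graph from the model's output confidence. The `predict_proba` oracle and all names below are illustrative placeholders, not from the paper.

```python
# Hypothetical sketch: confidence-based membership inference against a node
# classifier. Assumption (not from the survey): the model is systematically
# more confident on nodes it was trained on.
import numpy as np

def membership_inference(predict_proba, node_features, threshold=0.9):
    """Flag nodes whose top-1 class probability exceeds a threshold.

    predict_proba: callable mapping an (n, d) feature matrix to (n, c)
                   class probabilities -- the attacker's query access.
    Returns a boolean array; True = guessed "training-set member".
    """
    probs = predict_proba(node_features)   # query the target model
    confidence = probs.max(axis=1)         # top-1 confidence per node
    return confidence > threshold

# Toy usage with a stand-in "model": softmax over fixed random logits.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
W = rng.normal(size=(8, 3))

def toy_model(x):
    logits = x @ W
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

print(membership_inference(toy_model, X))
```

Real attacks calibrate the threshold with shadow models; the survey organizes such attacks by the information they target (e.g., membership, private attributes, or links).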
Related papers
- Unveiling Privacy Vulnerabilities: Investigating the Role of Structure in Graph Data [17.11821761700748]
This study advances the understanding of, and protection against, privacy risks emanating from network structure.
We develop a novel graph private attribute inference attack, which acts as a pivotal tool for evaluating the potential for privacy leakage through network structures.
Our attack model poses a significant threat to user privacy, and our graph data publishing method successfully achieves the optimal privacy-utility trade-off.
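As an aside, here is a minimal sketch of the general idea of structure-based attribute inference, assuming homophily (connected users tend to share attributes). This is a generic baseline for intuition, not the paper's attack model.

```python
# Generic homophily-based attribute inference sketch (assumption: connected
# users tend to share attributes). Not the paper's specific attack.
from collections import Counter

def infer_private_attribute(adjacency, public_attrs, target):
    """Guess the hidden attribute of `target` by majority vote over the
    publicly visible attributes of its neighbors.

    adjacency: dict mapping node -> list of neighbor nodes
    public_attrs: dict mapping node -> attribute (None if hidden)
    """
    visible = [public_attrs[n] for n in adjacency[target]
               if public_attrs.get(n) is not None]
    if not visible:
        return None  # structure alone reveals nothing here
    return Counter(visible).most_common(1)[0][0]

# Toy graph: node 0 hides its attribute; two of its three neighbors say "A".
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
attrs = {0: None, 1: "A", 2: "A", 3: "B"}
print(infer_private_attribute(adj, attrs, 0))  # -> "A"
```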
arXiv Detail & Related papers (2024-07-26T07:40:54Z)
- GNNBleed: Inference Attacks to Unveil Private Edges in Graphs with Realistic Access to GNN Models [3.0509197593879844]
This paper investigates edge privacy in contexts where adversaries possess black-box GNN model access.
We introduce a series of privacy attacks grounded on the message-passing mechanism of GNNs.
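To illustrate the underlying intuition (a hedged sketch, not GNNBleed's actual algorithm): because message passing propagates a node's features to its neighbors, perturbing node u and observing how node v's black-box posterior shifts can signal whether an edge connects them. `query_model` is a hypothetical oracle.

```python
# Influence-style edge probe, illustrating the message-passing intuition
# only. `query_model` is a hypothetical black-box oracle returning node
# posteriors for a full feature matrix.
import numpy as np

def edge_influence_score(query_model, features, u, v, eps=1.0):
    base = query_model(features)              # posteriors before the probe
    perturbed = features.copy()
    perturbed[u] += eps                       # nudge u's features
    shifted = query_model(perturbed)
    return np.abs(shifted[v] - base[v]).sum() # response observed at v

# Toy oracle: one round of mean aggregation over a fixed adjacency matrix.
A = np.array([[0., 1., 0.], [1., 0., 0.], [0., 0., 0.]])
toy = lambda X: (A + np.eye(3)) @ X / 2.0
X0 = np.zeros((3, 2))
print(edge_influence_score(toy, X0, u=0, v=1))  # neighbors: nonzero shift
print(edge_influence_score(toy, X0, u=0, v=2))  # non-neighbors: 0.0
```

High scores for a candidate pair suggest an edge within the GNN's receptive field.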
arXiv Detail & Related papers (2023-11-03T20:26:03Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
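As a concrete instance of DP publishing (a minimal sketch, not this paper's generative-model framework): release a graph statistic through the standard Laplace mechanism. Under edge-level DP, adding or removing one edge changes the edge count by 1, so the L1 sensitivity is 1 and Laplace noise with scale 1/epsilon yields epsilon-DP.

```python
# Standard Laplace mechanism for releasing a sanitized graph statistic.
# Edge-level DP: one edge changes the count by at most 1 (sensitivity 1).
import numpy as np

def dp_edge_count(num_edges, epsilon, rng=None):
    rng = rng or np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return num_edges + noise

print(dp_edge_count(num_edges=4231, epsilon=0.5))  # sanitized release
```

Smaller epsilon means more noise and stronger privacy; DP generative models extend this idea from single statistics to whole synthetic datasets.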
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
- Node Injection Link Stealing Attack [0.649970685896541]
We present a stealthy and effective attack that exposes privacy vulnerabilities in Graph Neural Networks (GNNs) by inferring private links within graph-structured data.
Our work highlights the privacy vulnerabilities inherent in GNNs, underscoring the importance of developing robust privacy-preserving mechanisms for their application.
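For context, a sketch of the classic posterior-similarity heuristic behind many link-stealing attacks (the generic baseline, not this paper's node-injection attack): message passing smooths predictions over edges, so connected nodes tend to receive similar posteriors.

```python
# Generic link-stealing baseline: score candidate links by the cosine
# similarity of the two nodes' posterior vectors.
import numpy as np

def posterior_link_score(post_u, post_v):
    """Cosine similarity between two nodes' posterior vectors."""
    num = float(np.dot(post_u, post_v))
    den = float(np.linalg.norm(post_u) * np.linalg.norm(post_v)) + 1e-12
    return num / den

# Posteriors smoothed over a shared edge score higher than unrelated ones.
print(posterior_link_score(np.array([0.8, 0.1, 0.1]),
                           np.array([0.7, 0.2, 0.1])))  # high: likely linked
print(posterior_link_score(np.array([0.8, 0.1, 0.1]),
                           np.array([0.1, 0.1, 0.8])))  # low: less likely
```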
arXiv Detail & Related papers (2023-07-25T14:51:01Z)
- A Unified Framework of Graph Information Bottleneck for Robustness and Membership Privacy [43.11374582152925]
Graph Neural Networks (GNNs) have achieved great success in modeling graph-structured data.
However, GNNs are vulnerable to adversarial attacks, which can fool the model into making the attacker's desired predictions.
In this work, we study a novel problem of developing robust and membership privacy-preserving GNNs.
arXiv Detail & Related papers (2023-06-14T16:11:00Z)
- Unraveling Privacy Risks of Individual Fairness in Graph Neural Networks [66.0143583366533]
Graph neural networks (GNNs) have gained significant traction due to their expansive real-world applications.
To build trustworthy GNNs, two aspects - fairness and privacy - have emerged as critical considerations.
Previous studies have separately examined the fairness and privacy aspects of GNNs, revealing their trade-off with GNN performance.
Yet, the interplay between these two aspects remains unexplored.
arXiv Detail & Related papers (2023-01-30T14:52:23Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI, a method to infer the private training graph data.
Our experimental results show that existing defenses are not sufficiently effective, calling for more advanced defenses against privacy attacks.
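To convey the model-inversion objective (a hedged, greedy sketch; GraphMI itself uses a more refined gradient-based optimization): with known training labels, score each candidate edge by how much inserting it raises the model's confidence in those labels, keeping the top-scoring edges as the reconstructed private graph. `model_confidence` is a hypothetical oracle.

```python
# Greedy illustration of graph reconstruction via model inversion. Not
# GraphMI's actual procedure; `model_confidence` is a hypothetical oracle.
import itertools
import numpy as np

def reconstruct_edges(model_confidence, num_nodes, top_k):
    """model_confidence: callable adj_matrix -> scalar confidence in the
    known training labels. Returns the top_k edges by confidence gain."""
    base_adj = np.zeros((num_nodes, num_nodes))
    base = model_confidence(base_adj)
    scores = []
    for u, v in itertools.combinations(range(num_nodes), 2):
        adj = base_adj.copy()
        adj[u, v] = adj[v, u] = 1.0             # try inserting edge (u, v)
        scores.append((model_confidence(adj) - base, (u, v)))
    scores.sort(reverse=True)
    return [edge for _, edge in scores[:top_k]]

# Toy oracle that secretly prefers edge (0, 1):
toy = lambda A: float(A[0, 1])
print(reconstruct_edges(toy, num_nodes=4, top_k=1))  # -> [(0, 1)]
```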
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability [59.80140875337769]
Graph Neural Networks (GNNs) have developed rapidly in recent years.
However, GNNs can leak private information, are vulnerable to adversarial attacks, and can inherit and magnify societal bias from training data.
This paper gives a comprehensive survey of GNNs in the computational aspects of privacy, robustness, fairness, and explainability.
arXiv Detail & Related papers (2022-04-18T21:41:07Z)
- Releasing Graph Neural Networks with Differential Privacy Guarantees [0.81308403220442]
We propose PrivGNN, a privacy-preserving framework for releasing GNN models in a centralized setting.
PrivGNN combines the knowledge-distillation framework with two noise mechanisms, random subsampling and noisy labeling, to ensure rigorous privacy guarantees.
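A minimal sketch of that recipe as summarized above: a teacher trained on the private graph labels a random subsample of public nodes, noise perturbs those labels, and only a student trained on the noisy public labels is released. Noise calibration, privacy accounting, and the exact GNN architectures are elided; `teacher_predict` is a hypothetical stand-in.

```python
# Sketch of the subsample-then-noisy-label step; the released model is a
# student trained only on the returned (idx, labels) pairs.
import numpy as np

def privgnn_release(teacher_predict, public_X, num_classes,
                    sample_frac=0.1, noise_scale=1.0, rng=None):
    rng = rng or np.random.default_rng()
    n = public_X.shape[0]
    # Random subsampling: the teacher only ever labels a small public slice.
    idx = rng.choice(n, size=max(1, int(sample_frac * n)), replace=False)

    # Knowledge distillation: teacher's class scores on the subsample.
    scores = teacher_predict(public_X[idx])            # (m, num_classes)

    # Noisy labeling: perturb the scores before taking the argmax label.
    noisy = scores + rng.laplace(scale=noise_scale, size=scores.shape)
    labels = noisy.argmax(axis=1)
    return idx, labels   # train the released student on these pairs only
```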
arXiv Detail & Related papers (2021-09-18T11:35:19Z)
- Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies [73.39668293190019]
Deep neural networks can be easily fooled by small perturbations on the input.
Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability.
In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods.
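To make the "small perturbation" point concrete, here is the classic one-step gradient-sign (FGSM-style) perturbation on node features, shown on a toy linear scorer so the gradient is analytic. Graph attacks surveyed here also perturb edges, but the principle is the same.

```python
# FGSM-style one-step perturbation on features, on a toy linear classifier.
import numpy as np

def fgsm_features(x, w, eps=0.1):
    """Push features against the score w.x so the prediction flips."""
    grad = w                         # d(w.x)/dx for a linear scorer
    return x - eps * np.sign(grad)   # small step against the gradient

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, -0.1, 0.1])
print(np.dot(w, x))                  # 0.45 -> predicted positive
x_adv = fgsm_features(x, w, eps=0.3)
print(np.dot(w, x_adv))              # -0.6 -> flipped by a tiny perturbation
```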
arXiv Detail & Related papers (2020-03-02T04:32:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.