Releasing Graph Neural Networks with Differential Privacy Guarantees
- URL: http://arxiv.org/abs/2109.08907v2
- Date: Thu, 2 Nov 2023 05:37:09 GMT
- Title: Releasing Graph Neural Networks with Differential Privacy Guarantees
- Authors: Iyiola E. Olatunji, Thorben Funke, and Megha Khosla
- Abstract summary: We propose PrivGNN, a privacy-preserving framework for releasing GNN models in a centralized setting.
PrivGNN combines the knowledge-distillation framework with two noise mechanisms, random subsampling and noisy labeling, to ensure rigorous privacy guarantees.
- Score: 0.81308403220442
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the increasing popularity of graph neural networks (GNNs) in several
sensitive applications like healthcare and medicine, concerns have been raised
over the privacy aspects of trained GNNs. Notably, GNNs are vulnerable to
privacy attacks, such as membership inference attacks, even when only black-box
access to the trained model is granted. We propose PrivGNN, a privacy-preserving
framework for releasing GNN models in a centralized setting. Assuming access to
a public unlabeled graph, PrivGNN releases GNN models trained explicitly on
public data along with knowledge obtained from the private data in a
privacy-preserving manner. PrivGNN combines the knowledge-distillation framework
with two noise mechanisms, random subsampling and noisy labeling, to ensure
rigorous privacy guarantees. We theoretically analyze our approach in the Rényi
differential privacy framework. In addition, we show strong experimental
performance of our method compared to several baselines adapted for
graph-structured data. Our code is available at
https://github.com/iyempissy/privGnn.
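A minimal sketch of the two noise mechanisms described above, assuming only NumPy; the teacher GNN is faked with random posteriors, and all names here are illustrative rather than taken from the authors' released code (see the repository linked above for the real implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def subsample_private_nodes(num_private, gamma, rng):
    """Random subsampling: keep each private node with probability gamma,
    which amplifies the privacy of whatever mechanism runs on the sample."""
    return np.where(rng.random(num_private) < gamma)[0]

def noisy_pseudo_labels(teacher_posteriors, eps, rng):
    """Noisy labeling: perturb the teacher's class posteriors with Laplace
    noise and take the argmax, yielding private pseudo-labels."""
    noise = rng.laplace(scale=1.0 / eps, size=teacher_posteriors.shape)
    return np.argmax(teacher_posteriors + noise, axis=1)

# Toy run: a teacher trained on the subsampled private graph would produce
# posteriors for 200 public query nodes; we fake them here for illustration.
sample_ids = subsample_private_nodes(num_private=1000, gamma=0.1, rng=rng)
posteriors = rng.dirichlet(np.ones(10), size=200)
pseudo_labels = noisy_pseudo_labels(posteriors, eps=1.0, rng=rng)
# A student GNN trained on (public graph, pseudo_labels) is then released;
# only these noisy labels ever depend on the private data.
```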
Related papers
- Link Stealing Attacks Against Inductive Graph Neural Networks [60.931106032824275]
A graph neural network (GNN) is a type of neural network that is specifically designed to process graph-structured data.
Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks.
This paper conducts a comprehensive privacy analysis of inductive GNNs through the lens of link stealing attacks.
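For context on this attack family: a classic link-stealing heuristic queries the target model for node posteriors and predicts an edge where two posteriors are unusually similar. A hedged NumPy sketch under that assumption (the threshold and all names are hypothetical):

```python
import numpy as np

def link_stealing_scores(posteriors, pairs):
    """Score candidate node pairs by posterior distance: under homophily,
    connected nodes tend to receive similar predictions from the target
    GNN, so a small distance suggests a (private) link."""
    u, v = pairs[:, 0], pairs[:, 1]
    return -np.linalg.norm(posteriors[u] - posteriors[v], axis=1)

rng = np.random.default_rng(1)
posteriors = rng.dirichlet(np.ones(5), size=100)  # stand-in for GNN outputs
pairs = rng.integers(0, 100, size=(20, 2))        # candidate node pairs
scores = link_stealing_scores(posteriors, pairs)
predicted_links = pairs[scores > np.median(scores)]  # illustrative threshold
```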
arXiv Detail & Related papers (2024-05-09T14:03:52Z)
- Blink: Link Local Differential Privacy in Graph Neural Networks via Bayesian Estimation [79.64626707978418]
We propose using link local differential privacy over decentralized nodes to train graph neural networks.
Our approach spends the privacy budget separately on links and degrees of the graph for the server to better denoise the graph topology.
Our approach outperforms existing methods in terms of accuracy under varying privacy budgets.
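Reading the budget split above as randomized response on adjacency bits plus a noisy degree report, a minimal sketch follows; Blink's actual mechanism and its server-side Bayesian denoising are more involved than this:

```python
import numpy as np

def perturb_adjacency_row(adj_row, eps_links, rng):
    """Randomized response on a node's adjacency bits: flip each bit with
    probability 1/(1+e^eps), satisfying eps-edge local DP."""
    flip_prob = 1.0 / (1.0 + np.exp(eps_links))
    flips = rng.random(adj_row.shape) < flip_prob
    return np.where(flips, 1 - adj_row, adj_row)

def report_noisy_degree(adj_row, eps_degree, rng):
    """Degree reported with Laplace noise (sensitivity 1: neighboring
    adjacency lists differ in one bit); helps the server denoise topology."""
    return adj_row.sum() + rng.laplace(scale=1.0 / eps_degree)

rng = np.random.default_rng(2)
row = rng.integers(0, 2, size=50)       # one node's true adjacency bits
noisy_row = perturb_adjacency_row(row, eps_links=1.0, rng=rng)
deg_hat = report_noisy_degree(row, eps_degree=1.0, rng=rng)
```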
arXiv Detail & Related papers (2023-09-06T17:53:31Z)
- A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications [76.88662943995641]
Graph Neural Networks (GNNs) have gained significant attention owing to their ability to handle graph-structured data.
To address the accompanying privacy risks, researchers have started to develop privacy-preserving GNNs.
Despite this progress, there is a lack of a comprehensive overview of the attacks and the techniques for preserving privacy in the graph domain.
arXiv Detail & Related papers (2023-08-31T00:31:08Z)
- Unveiling the Role of Message Passing in Dual-Privacy Preservation on GNNs [7.626349365968476]
Graph Neural Networks (GNNs) are powerful tools for learning representations on graphs, such as social networks.
Privacy-preserving GNNs have been proposed, focusing on preserving node and/or link privacy.
We propose a principled privacy-preserving GNN framework that effectively safeguards both node and link privacy.
arXiv Detail & Related papers (2023-08-25T17:46:43Z)
- Differentially Private Graph Neural Network with Importance-Grained Noise Adaption [6.319864669924721]
Graph Neural Networks (GNNs) with differential privacy have been proposed to preserve graph privacy when nodes represent personal and sensitive information.
We study the problem of importance-grained privacy, where nodes contain personal data that need to be kept private but are critical for training a GNN.
We propose NAP-GNN, a node-grained privacy-preserving GNN algorithm with privacy guarantees based on adaptive differential privacy to safeguard node information.
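A loose sketch of the adaptive idea: give higher-importance nodes a larger share of the budget and hence less feature noise. The importance scores, the proportional allocation, and the accounting below are illustrative assumptions, not NAP-GNN's actual algorithm:

```python
import numpy as np

def importance_adaptive_noise(node_feats, importance, eps_total, rng):
    """Perturb node features with Laplace noise whose scale shrinks as a
    node's importance grows. NOTE: real per-node DP accounting is subtler
    than this proportional split; this only illustrates the adaptivity."""
    share = importance / importance.sum()     # per-node budget fraction
    eps_node = eps_total * share
    scale = 1.0 / np.maximum(eps_node, 1e-6)  # avoid division by zero
    return node_feats + rng.laplace(scale=scale[:, None],
                                    size=node_feats.shape)

rng = np.random.default_rng(3)
feats = rng.normal(size=(100, 16))
imp = rng.random(100)                         # hypothetical importance scores
private_feats = importance_adaptive_noise(feats, imp, eps_total=8.0, rng=rng)
```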
arXiv Detail & Related papers (2023-08-09T13:18:41Z)
- Node Injection Link Stealing Attack [0.649970685896541]
We present a stealthy and effective attack that exposes privacy vulnerabilities in Graph Neural Networks (GNNs) by inferring private links within graph-structured data.
Our work highlights the privacy vulnerabilities inherent in GNNs, underscoring the importance of developing robust privacy-preserving mechanisms for their application.
arXiv Detail & Related papers (2023-07-25T14:51:01Z)
- ProGAP: Progressive Graph Neural Networks with Differential Privacy Guarantees [8.79398901328539]
Graph Neural Networks (GNNs) have become a popular tool for learning on graphs, but their widespread use raises privacy concerns.
We propose a new differentially private GNN called ProGAP that uses a progressive training scheme to improve such accuracy-privacy trade-offs.
arXiv Detail & Related papers (2023-04-18T12:08:41Z)
- Unraveling Privacy Risks of Individual Fairness in Graph Neural Networks [66.0143583366533]
Graph neural networks (GNNs) have gained significant traction due to their expansive real-world applications.
To build trustworthy GNNs, two aspects - fairness and privacy - have emerged as critical considerations.
Previous studies have separately examined the fairness and privacy aspects of GNNs, revealing their trade-off with GNN performance.
Yet, the interplay between these two aspects remains unexplored.
arXiv Detail & Related papers (2023-01-30T14:52:23Z)
- Towards Private Learning on Decentralized Graphs with Local Differential Privacy [45.47822758278652]
Solitude is a new privacy-preserving learning framework based on graph neural networks (GNNs).
Our new framework simultaneously protects node feature privacy and edge privacy, and can be seamlessly incorporated into any GNN with privacy-utility guarantees.
arXiv Detail & Related papers (2022-01-23T23:20:56Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural network training, such as gradient clipping and noise addition, affect the robustness of the model.
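The two ingredients named here are the per-example clipping and noise addition of DP-SGD; a minimal NumPy sketch of a single update step (library-agnostic, parameter names illustrative):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr, clip_norm, noise_mult, rng):
    """One DP-SGD update: clip each example's gradient to clip_norm,
    average, then add Gaussian noise calibrated to the clipping bound."""
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(
        1.0, clip_norm / np.maximum(norms, 1e-12))
    noise = rng.normal(0.0, noise_mult * clip_norm / len(per_example_grads),
                       size=params.shape)
    return params - lr * (clipped.mean(axis=0) + noise)

rng = np.random.default_rng(4)
params = np.zeros(10)
grads = rng.normal(size=(32, 10))  # toy per-example gradients
params = dp_sgd_step(params, grads, lr=0.1, clip_norm=1.0,
                     noise_mult=1.1, rng=rng)
```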
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.