Blink: Link Local Differential Privacy in Graph Neural Networks via Bayesian Estimation
- URL: http://arxiv.org/abs/2309.03190v2
- Date: Thu, 7 Sep 2023 08:28:29 GMT
- Title: Blink: Link Local Differential Privacy in Graph Neural Networks via Bayesian Estimation
- Authors: Xiaochen Zhu, Vincent Y. F. Tan, Xiaokui Xiao
- Abstract summary: We propose using link local differential privacy over decentralized nodes to train graph neural networks.
Our approach spends the privacy budget separately on links and degrees of the graph for the server to better denoise the graph topology.
Our approach outperforms existing methods in terms of accuracy under varying privacy budgets.
- Score: 79.64626707978418
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) have gained increasing popularity
due to their superior capability in learning node embeddings for various graph
inference tasks, but training them can raise privacy concerns. To address this,
we propose using link local differential privacy over decentralized nodes,
enabling collaboration with an untrusted server to train GNNs without revealing
the existence of any link. Our approach spends the privacy budget separately on
links and degrees of the graph for the server to better denoise the graph
topology using Bayesian estimation, alleviating the negative impact of LDP on
the accuracy of the trained GNNs. We bound the mean absolute error of the
inferred link probabilities against the ground truth graph topology. We then
propose two variants of our LDP mechanism complementing each other in different
privacy settings, one of which estimates fewer links under lower privacy
budgets to avoid false positive link estimates when the uncertainty is high,
while the other utilizes more information and performs better given relatively
higher privacy budgets. Furthermore, we propose a hybrid variant that combines
both strategies and is able to perform better across different privacy budgets.
Extensive experiments show that our approach outperforms existing methods in
terms of accuracy under varying privacy budgets.
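The pipeline the abstract describes, randomized response on each node's adjacency bits, a separately privatized degree, and server-side Bayesian denoising, can be illustrated with a minimal numpy sketch. This is not the authors' exact Blink estimator; the Laplace mechanism for degrees, the degree-based prior, and the clipping constants are simplifying assumptions for illustration.

```python
import numpy as np

def randomized_response(bits, eps):
    """Keep each adjacency bit w.p. e^eps / (1 + e^eps), else flip it."""
    p_keep = np.exp(eps) / (1.0 + np.exp(eps))
    keep = np.random.random(bits.shape) < p_keep
    return np.where(keep, bits, 1 - bits)

def client_report(adj_row, eps_link, eps_degree):
    """Each node spends its budget separately on its links and its degree."""
    noisy_row = randomized_response(adj_row, eps_link)
    noisy_degree = adj_row.sum() + np.random.laplace(scale=1.0 / eps_degree)
    return noisy_row, noisy_degree

def posterior_link_probs(noisy_row, prior, eps_link):
    """Bayesian denoising: P(link | reported bit) under randomized response."""
    p_keep = np.exp(eps_link) / (1.0 + np.exp(eps_link))
    like_1 = np.where(noisy_row == 1, p_keep, 1.0 - p_keep)  # P(report | link)
    like_0 = np.where(noisy_row == 1, 1.0 - p_keep, p_keep)  # P(report | no link)
    return like_1 * prior / (like_1 * prior + like_0 * (1.0 - prior))

def server_denoise(noisy_rows, noisy_degrees, eps_link, n):
    """Use each node's noisy degree to form a prior link probability."""
    priors = np.clip(np.asarray(noisy_degrees) / (n - 1), 1e-6, 1 - 1e-6)
    return np.stack([posterior_link_probs(row, pi, eps_link)
                     for row, pi in zip(noisy_rows, priors)])
```

The resulting matrix of inferred link probabilities can then serve as a soft, weighted adjacency for GNN training, which is how the denoising step mitigates the accuracy loss caused by LDP.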
Related papers
- Privacy Preserving Semi-Decentralized Mean Estimation over Intermittently-Connected Networks [59.43433767253956]
We consider the problem of privately estimating the mean of vectors distributed across different nodes of an unreliable wireless network.
In a semi-decentralized setup, nodes can collaborate with their neighbors to compute a local consensus, which they relay to a central server.
We study the tradeoff between collaborative relaying and privacy leakage due to the data sharing among nodes.
arXiv Detail & Related papers (2024-06-06T06:12:15Z)
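A rough sketch of the semi-decentralized pattern this summary describes: nodes perturb their vectors locally, run one consensus round with their neighbors, and relay the result to a central server. The Gaussian noise scale and the single averaging round are illustrative assumptions; the paper itself also models intermittent connectivity and optimizes the collaboration/privacy trade-off.

```python
import numpy as np

def private_semidecentralized_mean(x, adjacency, sigma, seed=None):
    """x: (n, d) node vectors; adjacency: (n, n) 0/1 neighbor matrix."""
    rng = np.random.default_rng(seed)
    n, _ = x.shape
    noisy = x + rng.normal(scale=sigma, size=x.shape)         # local privacy noise
    relayed = np.empty_like(noisy)
    for i in range(n):
        members = np.append(np.flatnonzero(adjacency[i]), i)  # neighbors + self
        relayed[i] = noisy[members].mean(axis=0)              # local consensus
    return relayed.mean(axis=0)                               # server-side average
```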
- A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications [76.88662943995641]
Graph Neural Networks (GNNs) have gained significant attention owing to their ability to handle graph-structured data, but that data often contains sensitive information.
To address this issue, researchers have started to develop privacy-preserving GNNs.
Despite this progress, there is a lack of a comprehensive overview of the attacks and the techniques for preserving privacy in the graph domain.
arXiv Detail & Related papers (2023-08-31T00:31:08Z)
- Differentially Private Decoupled Graph Convolutions for Multigranular Topology Protection [38.96828804683783]
GNNs can inadvertently expose sensitive user information and interactions through their model predictions.
Applying standard DP approaches directly to GNNs is not advisable for two main reasons.
We propose a new framework termed Graph Differential Privacy (GDP), specifically tailored to graph learning.
arXiv Detail & Related papers (2023-07-12T19:29:06Z)
- Heterogeneous Randomized Response for Differential Privacy in Graph Neural Networks [18.4005860362025]
Graph neural networks (GNNs) are susceptible to privacy inference attacks (PIAs).
We propose a novel mechanism to protect nodes' features and edges against PIAs under differential privacy (DP) guarantees.
We derive significantly better randomization probabilities and tighter error bounds at both levels of nodes' features and edges.
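As a toy illustration of heterogeneous randomized response, the sketch below flips each binary attribute with its own probability, spending more budget (less noise) on feature bits than on edge bits. The particular budget split is an assumption for illustration, not the paper's derived optimal randomization probabilities.

```python
import numpy as np

def heterogeneous_rr(bits, eps_per_bit):
    """Randomized response with a separate privacy budget per bit."""
    p_keep = np.exp(eps_per_bit) / (1.0 + np.exp(eps_per_bit))
    keep = np.random.random(bits.shape) < p_keep
    return np.where(keep, bits, 1 - bits)

# Hypothetical budget split: feature bits get more budget than edge bits.
features = np.random.randint(0, 2, size=8)
edges = np.random.randint(0, 2, size=16)
noisy_features = heterogeneous_rr(features, np.full(features.shape, 0.6))
noisy_edges = heterogeneous_rr(edges, np.full(edges.shape, 0.2))
```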
arXiv Detail & Related papers (2022-11-10T18:52:46Z)
- Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To mitigate attacks on graph properties, obfuscated features that combine information from both vectors are communicated instead.
arXiv Detail & Related papers (2022-10-21T04:38:25Z)
- TAN Without a Burn: Scaling Laws of DP-SGD [70.7364032297978]
Differentially Private methods for training Deep Neural Networks (DNNs) have progressed recently.
We decouple privacy analysis and experimental behavior of noisy training to explore the trade-off with minimal computational requirements.
We apply the proposed method to CIFAR-10 and ImageNet and, in particular, strongly improve the state of the art on ImageNet with a +9-point gain in top-1 accuracy.
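For context, DP-SGD combines per-example gradient clipping with calibrated Gaussian noise; the numpy sketch below shows one such update step. This is the standard DP-SGD recipe, not TAN's contribution, which is to decouple the privacy analysis from the simulation of noisy training; the hyperparameter values are placeholders.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr, clip_norm, noise_multiplier, seed=None):
    """per_example_grads: (batch, dim) gradients, one row per example."""
    rng = np.random.default_rng(seed)
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    # Clip each example's gradient to at most clip_norm in L2 norm.
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    batch = per_example_grads.shape[0]
    # Gaussian noise scaled to the clipping bound.
    noise = rng.normal(scale=noise_multiplier * clip_norm,
                       size=per_example_grads.shape[1])
    noisy_mean_grad = (clipped.sum(axis=0) + noise) / batch
    return params - lr * noisy_mean_grad
```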
arXiv Detail & Related papers (2022-10-07T08:44:35Z)
- Heterogeneous Graph Neural Network for Privacy-Preserving Recommendation [25.95411320126426]
With advances in deep learning, social networks are commonly modeled as heterogeneous graphs and processed with heterogeneous graph neural networks (HGNNs).
We propose a novel heterogeneous graph neural network privacy-preserving method based on a differential privacy mechanism named HeteDP.
arXiv Detail & Related papers (2022-10-02T14:41:02Z)
- Muffliato: Peer-to-Peer Privacy Amplification for Decentralized Optimization and Averaging [20.39986955578245]
We introduce pairwise network differential privacy, a relaxation of Local Differential Privacy (LDP).
We derive a differentially private decentralized optimization algorithm that alternates between local gradient descent steps and gossip averaging.
Our results show that our algorithms amplify privacy guarantees as a function of the distance between nodes in the graph.
arXiv Detail & Related papers (2022-06-10T13:32:35Z)
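The alternation this summary describes can be sketched as a local gradient step at each node followed by gossip averaging of noise-perturbed iterates; repeating such rounds is what amplifies privacy between distant nodes. W, lr, and sigma below are illustrative placeholders, not the paper's tuned quantities.

```python
import numpy as np

def noisy_gossip_round(x, local_grad, W, lr, sigma, seed=None):
    """x: (n, d) per-node iterates; W: (n, n) doubly stochastic gossip matrix."""
    rng = np.random.default_rng(seed)
    x = x - lr * local_grad(x)                           # local gradient step
    noisy = x + rng.normal(scale=sigma, size=x.shape)    # local DP noise
    return W @ noisy                                     # gossip averaging
```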
- GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation [19.247325210343035]
Graph Neural Networks (GNNs) are powerful models designed for graph data that learn node representations.
Recent studies have shown that GNNs can raise significant privacy concerns when graph data contain sensitive information.
We propose GAP, a novel differentially private GNN that safeguards privacy of nodes and edges.
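The core idea named in the title, aggregation perturbation, is to bound each node's contribution and then add noise to the neighborhood aggregate rather than to the model parameters. The sketch below assumes row-normalized embeddings and a Gaussian mechanism; GAP's full pipeline additionally bounds node degrees and caches the noisy aggregates across training, which is omitted here.

```python
import numpy as np

def private_aggregate(embeddings, adjacency, sigma, seed=None):
    """Sum neighbor embeddings (each bounded to unit norm), then add noise."""
    rng = np.random.default_rng(seed)
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / np.maximum(norms, 1e-12)   # each edge contributes <= 1
    agg = adjacency @ unit                          # neighborhood sum
    return agg + rng.normal(scale=sigma, size=agg.shape)
```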
arXiv Detail & Related papers (2022-03-02T08:58:07Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural network training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.