Preserving Node-level Privacy in Graph Neural Networks
- URL: http://arxiv.org/abs/2311.06888v1
- Date: Sun, 12 Nov 2023 16:21:29 GMT
- Title: Preserving Node-level Privacy in Graph Neural Networks
- Authors: Zihang Xiang, Tianhao Wang, Di Wang
- Abstract summary: We propose a solution that addresses the issue of node-level privacy in Graph Neural Networks (GNNs).
Our protocol consists of two main components: 1) a sampling routine called HeterPoisson, which employs a specialized node sampling strategy and a series of tailored operations to generate a batch of sub-graphs with desired properties, and 2) a randomization routine that utilizes symmetric multivariate Laplace noise instead of the commonly used Gaussian noise.
Our protocol enables GNN learning with good performance, as demonstrated by experiments on five real-world datasets.
- Score: 8.823710998526705
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Differential privacy (DP) has seen immense applications in learning on
tabular, image, and sequential data where instance-level privacy is concerned.
In learning on graphs, by contrast, work on node-level privacy is scarce. Challenges
arise because existing DP protocols hardly apply to the message-passing mechanism
in Graph Neural Networks (GNNs).
In this study, we propose a solution that specifically addresses the issue of
node-level privacy. Our protocol consists of two main components: 1) a sampling
routine called HeterPoisson, which employs a specialized node sampling strategy
and a series of tailored operations to generate a batch of sub-graphs with
desired properties, and 2) a randomization routine that utilizes symmetric
multivariate Laplace (SML) noise instead of the commonly used Gaussian noise.
Our privacy accounting shows this particular combination provides a non-trivial
privacy guarantee. In addition, our protocol enables GNN learning with good
performance, as demonstrated by experiments on five real-world datasets;
compared with existing baselines, our method shows significant advantages,
especially in the high privacy regime. Experimentally, we also 1) perform
membership inference attacks against our protocol and 2) apply privacy audit
techniques to confirm our protocol's privacy integrity.
In the sequel, we present a study on a seemingly appealing approach, GAP
(Sajadmanesh et al., USENIX'23), that protects node-level privacy via
differentially private node/instance embeddings. Unfortunately, that work has
fundamental privacy flaws, which are identified through a thorough case study.
More importantly, we prove an impossibility result of achieving both (strong)
privacy and (acceptable) utility through private instance embedding. The
implication is that such an approach has intrinsic utility barriers when
enforcing differential privacy.
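To make the two components above concrete, here is a minimal sketch in Python under stated assumptions: plain Poisson node sampling stands in for HeterPoisson (whose tailored operations are specified in the paper), and the symmetric multivariate Laplace (SML) noise is drawn via the standard Gaussian-scale-mixture construction, which may differ from the paper's exact sampler. All names and parameters are illustrative.
```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_sample_subgraph(adj, q, rng):
    """Include each node independently with probability q (plain Poisson
    sampling) and keep the sub-graph induced by the sampled nodes."""
    mask = rng.random(adj.shape[0]) < q
    idx = np.flatnonzero(mask)
    return idx, adj[np.ix_(idx, idx)]

def sml_noise(d, scale, rng):
    """Symmetric multivariate Laplace noise via a Gaussian scale mixture:
    if W ~ Exp(1) and Z ~ N(0, I), then sqrt(W) * Z is multivariate Laplace."""
    return scale * np.sqrt(rng.exponential(1.0)) * rng.normal(size=d)

def private_step(per_subgraph_grads, clip, noise_scale, rng):
    """Clip each sub-graph's gradient contribution, sum, and randomize with
    SML noise -- a generic DP-SGD-style skeleton, not the paper's accounting."""
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
               for g in per_subgraph_grads]
    total = np.sum(clipped, axis=0)
    return (total + sml_noise(total.shape[0], noise_scale * clip, rng)) / len(clipped)

# Toy usage on a random undirected graph with fake per-subgraph gradients.
n, d = 100, 16
adj = np.triu((rng.random((n, n)) < 0.05).astype(float), 1)
adj = adj + adj.T
idx, sub = poisson_sample_subgraph(adj, q=0.2, rng=rng)
grads = [rng.normal(size=d) for _ in range(8)]
print(sub.shape, private_step(grads, clip=1.0, noise_scale=2.0, rng=rng).shape)
```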
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on data and allows for defining non-sensitive temporal regions without DP application, or for combining differential privacy with other privacy techniques within data samples.
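As an illustration of the idea only (not the paper's implementation), a masked mechanism might add noise solely inside the regions marked sensitive; the mask and the Laplace mechanism below are assumptions for the sketch.
```python
import numpy as np

def masked_laplace(x, sensitive_mask, epsilon, sensitivity=1.0, rng=None):
    """Add Laplace noise only where the mask marks a region as sensitive;
    non-sensitive entries are released unchanged. Illustrative only: the
    DP guarantee then applies to the masked region, not the whole sample."""
    rng = rng or np.random.default_rng()
    noise = rng.laplace(0.0, sensitivity / epsilon, size=x.shape)
    return np.where(sensitive_mask, x + noise, x)

x = np.arange(8, dtype=float)
mask = x >= 5                      # pretend the tail of the signal is sensitive
print(masked_laplace(x, mask, epsilon=1.0))
```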
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- Privacy-Preserving Graph Embedding based on Local Differential Privacy [26.164722283887333]
We introduce a novel privacy-preserving graph embedding framework, named PrivGE, to protect node data privacy.
Specifically, we propose an LDP mechanism to obfuscate node data and utilize personalized PageRank as the proximity measure to learn node representations.
Experiments on several real-world graph datasets demonstrate that PrivGE achieves an optimal balance between privacy and utility.
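A hedged sketch of the two-stage pipeline the summary describes, assuming a plain Laplace mechanism for the LDP step (PrivGE's actual perturbation may differ) and personalized PageRank via the usual fixed-point iteration:
```python
import numpy as np

rng = np.random.default_rng(0)

def ldp_perturb(features, epsilon, bound=1.0):
    """Clamp features to [-bound, bound] and add Laplace noise locally at
    each node (per-coordinate sensitivity 2*bound -- an assumption)."""
    x = np.clip(features, -bound, bound)
    return x + rng.laplace(0.0, 2.0 * bound / epsilon, size=x.shape)

def personalized_pagerank_propagate(adj, x, alpha=0.15, iters=50):
    """Propagate (perturbed) features with personalized PageRank:
    H <- alpha * X + (1 - alpha) * A_norm @ H."""
    deg = adj.sum(axis=1, keepdims=True)
    a_norm = adj / np.maximum(deg, 1.0)            # row-normalized adjacency
    h = x.copy()
    for _ in range(iters):
        h = alpha * x + (1.0 - alpha) * a_norm @ h
    return h

n, d = 50, 8
adj = np.triu((rng.random((n, n)) < 0.1).astype(float), 1)
adj = adj + adj.T
z = ldp_perturb(rng.normal(size=(n, d)), epsilon=2.0)
print(personalized_pagerank_propagate(adj, z).shape)
```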
arXiv Detail & Related papers (2023-10-17T08:06:08Z)
- Blink: Link Local Differential Privacy in Graph Neural Networks via Bayesian Estimation [79.64626707978418]
We propose using link local differential privacy over decentralized nodes to train graph neural networks.
Our approach spends the privacy budget separately on links and degrees of the graph for the server to better denoise the graph topology.
Our approach outperforms existing methods in terms of accuracy under varying privacy budgets.
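The summary suggests randomized response on links, a separate budget on degrees, and Bayesian denoising at the server. The sketch below is an assumed minimal version of that pattern, not Blink's actual estimator: flip each adjacency bit with randomized response, publish a Laplace-noised degree, and let the server form a posterior edge probability from a degree-derived prior.
```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_response(bits, epsilon):
    """Keep each bit with prob p = e^eps / (1 + e^eps), flip otherwise."""
    p = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    keep = rng.random(bits.shape) < p
    return np.where(keep, bits, 1 - bits), p

def noisy_degree(degree, epsilon):
    """Degree has sensitivity 1 under edge LDP (assumption for the sketch)."""
    return degree + rng.laplace(0.0, 1.0 / epsilon)

def posterior_edge_prob(report, p, prior):
    """Bayes' rule per reported bit: P(edge = 1 | report)."""
    like1 = np.where(report == 1, p, 1 - p)       # P(report | edge = 1)
    like0 = np.where(report == 1, 1 - p, p)       # P(report | edge = 0)
    return like1 * prior / (like1 * prior + like0 * (1 - prior))

n = 200
row = (rng.random(n) < 0.05).astype(int)          # one node's adjacency bits
report, p = randomized_response(row, epsilon=1.0)
prior = np.clip(noisy_degree(row.sum(), epsilon=1.0) / n, 0.0, 1.0)
print(posterior_edge_prob(report, p, prior)[:5])
```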
arXiv Detail & Related papers (2023-09-06T17:53:31Z)
- Differentially Private Graph Neural Network with Importance-Grained Noise Adaption [6.319864669924721]
Graph Neural Networks (GNNs) with differential privacy have been proposed to preserve graph privacy when nodes represent personal and sensitive information.
We study the problem of importance-grained privacy, where nodes contain personal data that need to be kept private but are critical for training a GNN.
We propose NAP-GNN, a node-grained privacy-preserving GNN algorithm with privacy guarantees based on adaptive differential privacy to safeguard node information.
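NAP-GNN's algorithm is not spelled out in the summary; the sketch below only illustrates the general "importance-grained" idea under assumptions of my own: split a total budget across nodes in proportion to importance, so more important nodes receive less noise.
```python
import numpy as np

rng = np.random.default_rng(0)

def importance_adaptive_noise(features, importance, total_epsilon, sensitivity=1.0):
    """Allocate the privacy budget across nodes in proportion to importance
    (a plain illustrative allocation, not NAP-GNN's actual rule): a larger
    epsilon share means a smaller Laplace noise scale for that node."""
    eps = total_epsilon * importance / importance.sum()
    scale = sensitivity / np.maximum(eps, 1e-6)
    return features + rng.laplace(0.0, 1.0, size=features.shape) * scale[:, None]

feats = rng.normal(size=(5, 4))
imp = np.array([0.1, 0.4, 0.2, 0.2, 0.1])
print(importance_adaptive_noise(feats, imp, total_epsilon=4.0).round(2))
```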
arXiv Detail & Related papers (2023-08-09T13:18:41Z)
- On Differential Privacy and Adaptive Data Analysis with Bounded Space [76.10334958368618]
We study the space complexity of the two related fields of differential privacy and adaptive data analysis.
We show that there exists a problem P that requires exponentially more space to be solved efficiently with differential privacy.
The line of work on adaptive data analysis focuses on understanding the number of samples needed for answering a sequence of adaptive queries.
arXiv Detail & Related papers (2023-02-11T14:45:31Z)
- Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To mitigate attacks on graph properties, only obfuscated features that blend information from both graphs' vectors are communicated.
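The sketch below is an assumed stand-in for that idea, not PPGM's design: instead of sending raw node-level representations, a device sends a randomly projected fusion of the two graph-level vectors.
```python
import numpy as np

rng = np.random.default_rng(0)

def obfuscated_fusion(h_a, h_b):
    """Communicate a feature that mixes information from both graph-level
    vectors rather than node-level representations. The element-wise fusion
    and random projection here are illustrative assumptions."""
    mixed = np.concatenate([h_a * h_b, np.abs(h_a - h_b)])   # pairwise fusion
    proj = rng.normal(size=(len(mixed) // 2, len(mixed)))    # random projection
    return proj @ mixed

ha, hb = rng.normal(size=16), rng.normal(size=16)
print(obfuscated_fusion(ha, hb).shape)
```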
arXiv Detail & Related papers (2022-10-21T04:38:25Z)
- Smooth Anonymity for Sparse Graphs [69.1048938123063]
Differential privacy has emerged as the gold standard of privacy; however, it faces limitations when it comes to sharing sparse datasets.
In this work, we consider a variation of $k$-anonymity, which we call smooth-$k$-anonymity, and design simple large-scale algorithms that efficiently provide smooth-$k$-anonymity.
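The smooth-$k$-anonymity variant is defined in the paper itself; for orientation, here is only the baseline notion it varies, as a tiny check that every quasi-identifier combination occurs at least $k$ times.
```python
from collections import Counter

def is_k_anonymous(records, k):
    """Check plain k-anonymity: every combination of quasi-identifier
    values must occur at least k times. Baseline notion only; the paper's
    smooth-k-anonymity relaxation is not reproduced here."""
    counts = Counter(map(tuple, records))
    return all(c >= k for c in counts.values())

rows = [(25, "NYC"), (25, "NYC"), (30, "SF"), (30, "SF"), (30, "SF")]
print(is_k_anonymous(rows, k=2))   # True: each group appears >= 2 times
```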
arXiv Detail & Related papers (2022-07-13T17:09:25Z)
- Muffliato: Peer-to-Peer Privacy Amplification for Decentralized Optimization and Averaging [20.39986955578245]
We introduce pairwise network differential privacy, a relaxation of Local Differential Privacy (LDP).
We derive a differentially private decentralized optimization algorithm that alternates between local gradient descent steps and gossip averaging.
Our results show that our algorithms amplify privacy guarantees as a function of the distance between nodes in the graph.
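A minimal skeleton of the "local noise + gossip averaging" pattern the summary describes, assuming Metropolis gossip weights; Muffliato's pairwise accounting and the interleaved gradient steps are in the paper.
```python
import numpy as np

rng = np.random.default_rng(0)

def gossip_matrix(adj):
    """Symmetric, doubly-stochastic gossip matrix (Metropolis weights)
    for an undirected communication graph."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    w = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                w[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
    np.fill_diagonal(w, 1.0 - w.sum(axis=1))
    return w

def noisy_gossip_average(values, adj, sigma, rounds=10):
    """Each node perturbs its local value with Gaussian noise, then the
    network repeatedly averages with its neighbors (x <- W x)."""
    w = gossip_matrix(adj)
    x = values + rng.normal(0.0, sigma, size=values.shape)
    for _ in range(rounds):
        x = w @ x
    return x

n = 20
adj = np.triu((rng.random((n, n)) < 0.2).astype(float), 1)
adj = adj + adj.T
print(noisy_gossip_average(rng.normal(size=n), adj, sigma=0.5)[:5])
```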
arXiv Detail & Related papers (2022-06-10T13:32:35Z)
- Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent [69.14164921515949]
We characterize privacy guarantees for individual examples when releasing models trained by DP-SGD.
We find that most examples enjoy stronger privacy guarantees than the worst-case bound.
This implies groups that are underserved in terms of model utility simultaneously experience weaker privacy guarantees.
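The intuition can be shown with a one-step Gaussian-mechanism calculation: an example whose gradient norm stays below the clipping threshold has a smaller effective sensitivity and hence a smaller per-example epsilon. The bound used below is the classic single-shot one; the paper's accounting is tighter and composition-aware.
```python
import numpy as np

def per_example_epsilon(grad_norms, clip, sigma, delta=1e-5):
    """Per-example (epsilon, delta) for ONE Gaussian-mechanism step:
    effective sensitivity is min(||g_i||, C) rather than the worst case C.
    Uses eps = s * sqrt(2 ln(1.25/delta)) / std (illustrative bound)."""
    s = np.minimum(grad_norms, clip)              # effective sensitivity
    std = sigma * clip                            # DP-SGD noise std
    return s * np.sqrt(2.0 * np.log(1.25 / delta)) / std

norms = np.array([0.1, 0.5, 1.0, 3.0])            # ||g_i|| before clipping
print(per_example_epsilon(norms, clip=1.0, sigma=2.0).round(3))
```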
arXiv Detail & Related papers (2022-06-06T13:49:37Z)
- GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation [19.247325210343035]
Graph Neural Networks (GNNs) are powerful models designed for graph data that learn node representations.
Recent studies have shown that GNNs can raise significant privacy concerns when graph data contain sensitive information.
We propose GAP, a novel differentially private GNN that safeguards privacy of nodes and edges.
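In the spirit of the summary, a simplified sketch of aggregation perturbation: bound each node embedding's norm, sum over neighbors, and add Gaussian noise to the aggregate. This is an edge-level simplification under my assumptions; GAP's full node/edge analysis is in the paper.
```python
import numpy as np

rng = np.random.default_rng(0)

def aggregation_perturbation(adj, x, sigma, norm_bound=1.0):
    """Clip embeddings to norm_bound, aggregate neighbor embeddings, and
    perturb the aggregate. With bounded rows, adding/removing one edge
    changes the sum by at most norm_bound, which calibrates the noise."""
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    x = x * np.minimum(1.0, norm_bound / np.maximum(norms, 1e-12))
    agg = adj @ x                                  # neighborhood sums
    return agg + rng.normal(0.0, sigma * norm_bound, size=agg.shape)

n, d = 30, 8
adj = np.triu((rng.random((n, n)) < 0.2).astype(float), 1)
adj = adj + adj.T
print(aggregation_perturbation(adj, rng.normal(size=(n, d)), sigma=1.0).shape)
```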
arXiv Detail & Related papers (2022-03-02T08:58:07Z)
- Node-Level Differentially Private Graph Neural Networks [14.917945355629563]
Graph Neural Networks (GNNs) are a popular technique for modelling graph-structured data.
This work formally defines the problem of learning 1-layer GNNs with node-level privacy.
We provide an algorithmic solution with a strong differential privacy guarantee.
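A common ingredient in node-level DP for GNNs, shown here as a hedged sketch rather than this paper's exact algorithm, is degree bounding: capping how many aggregations any single node can influence, so the sensitivity of a training step stays bounded.
```python
import numpy as np

rng = np.random.default_rng(0)

def bound_out_degree(adj, max_degree, rng):
    """Randomly keep at most max_degree neighbors per node so each node
    influences a bounded number of aggregations -- standard preprocessing
    for node-level DP. (This bounds out-degree only; actual constructions
    also control in-degree.)"""
    out = np.zeros_like(adj)
    for i in range(adj.shape[0]):
        nbrs = np.flatnonzero(adj[i])
        if len(nbrs) > max_degree:
            nbrs = rng.choice(nbrs, size=max_degree, replace=False)
        out[i, nbrs] = 1.0
    return out

n = 40
adj = np.triu((rng.random((n, n)) < 0.3).astype(float), 1)
adj = adj + adj.T
bounded = bound_out_degree(adj, max_degree=5, rng=rng)
print(int(bounded.sum(axis=1).max()))   # at most 5
```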
arXiv Detail & Related papers (2021-11-23T16:18:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.