Degree-Preserving Randomized Response for Graph Neural Networks under Local Differential Privacy
- URL: http://arxiv.org/abs/2202.10209v6
- Date: Sat, 1 Jun 2024 11:33:12 GMT
- Title: Degree-Preserving Randomized Response for Graph Neural Networks under Local Differential Privacy
- Authors: Seira Hidano, Takao Murakami
- Abstract summary: We propose a novel LDP algorithm called DPRR (Degree-Preserving Randomized Response) to provide LDP for edges in GNNs.
Our DPRR preserves each user's degree, and hence the graph structure, while providing edge LDP.
We focus on graph classification as a GNN task and evaluate DPRR on three social graph datasets.
- Score: 8.12606646175019
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Differentially private GNNs (Graph Neural Networks) have been recently studied to provide high accuracy in various tasks on graph data while strongly protecting user privacy. In particular, a recent study proposes an algorithm to protect each user's feature vector in an attributed graph, which includes feature vectors along with node IDs and edges, with LDP (Local Differential Privacy), a strong privacy notion without a trusted third party. However, this algorithm does not protect edges (friendships) in a social graph, and hence cannot protect user privacy in unattributed graphs, which include only node IDs and edges. How to provide strong privacy with high accuracy in unattributed graphs remains an open problem. In this paper, we propose a novel LDP algorithm called DPRR (Degree-Preserving Randomized Response) to provide LDP for edges in GNNs. Our DPRR preserves each user's degree, and hence the graph structure, while providing edge LDP. Technically, our DPRR uses Warner's RR (Randomized Response) and strategic edge sampling, where each user's sampling probability is automatically tuned using the Laplacian mechanism to preserve the degree information under edge LDP. We also propose a privacy budget allocation method to keep the noise in both Warner's RR and the Laplacian mechanism small. We focus on graph classification as a task of GNNs and evaluate the DPRR using three social graph datasets. Our experimental results show that the DPRR significantly outperforms three baselines and provides accuracy close to a non-private algorithm on all datasets with a reasonable privacy budget, e.g., epsilon=1. Finally, we introduce data poisoning attacks against our DPRR and a defense against these attacks. We evaluate them using the three social graph datasets and discuss the experimental results.
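To make the mechanism concrete, below is a minimal sketch of the DPRR idea as the abstract describes it: Warner's RR on a user's neighbor bits, a Laplace-noised degree, and edge sampling whose probability is tuned so that the expected output degree matches the noisy degree. The budget split (`eps_rr`, `eps_lap`), the function name, and the interface are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dprr_neighbor_bits(adj_row, eps_rr, eps_lap, rng=None):
    """Hedged sketch of DPRR for one user: Warner's RR followed by
    degree-preserving edge sampling (assumed interface, not the
    authors' code)."""
    rng = rng or np.random.default_rng()
    n_others = adj_row.size
    true_deg = float(adj_row.sum())

    # Warner's RR: keep each bit with p = e^eps / (e^eps + 1), flip otherwise.
    p = np.exp(eps_rr) / (np.exp(eps_rr) + 1.0)
    keep = rng.random(n_others) < p
    noisy_bits = np.where(keep, adj_row, 1 - adj_row)

    # Laplace mechanism on the degree (sensitivity 1 under edge LDP).
    noisy_deg = float(np.clip(true_deg + rng.laplace(scale=1.0 / eps_lap),
                              0.0, n_others))

    # After RR, a degree-d user reports d*p + (n-1-d)*(1-p) one-bits in
    # expectation; sample them so the expected output degree is noisy_deg.
    expected_ones = noisy_deg * p + (n_others - noisy_deg) * (1.0 - p)
    q = min(1.0, noisy_deg / max(expected_ones, 1e-12))

    return noisy_bits * (rng.random(n_others) < q)
```

Because only subsampling, rather than further additive noise, follows the RR step, each user's output degree concentrates around the (noisy) true degree, which is the degree-preserving property the abstract refers to.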
Related papers
- Blink: Link Local Differential Privacy in Graph Neural Networks via Bayesian Estimation [79.64626707978418]
We propose using link local differential privacy over decentralized nodes to train graph neural networks.
Our approach spends the privacy budget separately on the links and degrees of the graph so that the server can better denoise the graph topology (one possible form of this denoising is sketched after this entry).
Our approach outperforms existing methods in terms of accuracy under varying privacy budgets.
arXiv Detail & Related papers (2023-09-06T17:53:31Z)
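As a rough illustration of the entry above, one plausible form of the server-side Bayesian estimation is a per-link posterior: combine the RR likelihood with a prior edge probability derived from a privately reported degree. The function name, the degree-based prior, and the budget split are assumptions for illustration; Blink's actual estimator may differ.

```python
import numpy as np

def posterior_edge_prob(report, eps_link, noisy_deg, n_nodes):
    """Hedged sketch: Bayesian denoising of RR-perturbed link reports.

    report    : 0/1 array of reported bits for one node's potential links.
    eps_link  : budget spent on links; sets the RR keep probability.
    noisy_deg : privately reported degree, used to form a prior edge prob.
    """
    p_keep = np.exp(eps_link) / (np.exp(eps_link) + 1.0)  # P(report = true bit)
    prior = np.clip(noisy_deg / (n_nodes - 1), 1e-6, 1.0 - 1e-6)

    # Likelihood of the observed bit if the true edge is 1 vs. 0.
    like1 = np.where(report == 1, p_keep, 1.0 - p_keep)
    like0 = np.where(report == 1, 1.0 - p_keep, p_keep)

    return prior * like1 / (prior * like1 + (1.0 - prior) * like0)
```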
- Differentially Private Decoupled Graph Convolutions for Multigranular Topology Protection [38.96828804683783]
GNNs can inadvertently expose sensitive user information and interactions through their model predictions.
Applying standard DP approaches to GNNs directly is not advisable for two main reasons.
We propose a new framework termed Graph Differential Privacy (GDP), specifically tailored to graph learning.
arXiv Detail & Related papers (2023-07-12T19:29:06Z)
- Heterogeneous Randomized Response for Differential Privacy in Graph Neural Networks [18.4005860362025]
Graph neural networks (GNNs) are susceptible to privacy inference attacks (PIAs).
We propose a novel mechanism to protect nodes' features and edges against PIAs under differential privacy (DP) guarantees.
We derive significantly better randomization probabilities and tighter error bounds for both node features and edges.
arXiv Detail & Related papers (2022-11-10T18:52:46Z)
- Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To mitigate attacks on graph properties, obfuscated features that combine information from both vectors are communicated instead.
arXiv Detail & Related papers (2022-10-21T04:38:25Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- Differentially Private Graph Learning via Sensitivity-Bounded Personalized PageRank [74.53857412207034]
We propose an algorithm which outputs an approximate PPR and has provably bounded sensitivity to input edges.
Our sensitivity-bounded PPR directly implies private algorithms for several graph-learning tools, such as differentially private (DP) PPR ranking, DP node classification, and DP node embedding (a generic illustration follows this entry).
arXiv Detail & Related papers (2022-07-14T14:08:59Z)
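The entry above claims a PPR algorithm with provably bounded sensitivity to input edges. As a generic illustration (not the paper's construction), the sketch below computes PPR by power iteration and releases it with Laplace noise calibrated to an assumed L1 sensitivity bound `sens`; deriving such a bound is exactly the paper's contribution and is not reproduced here.

```python
import numpy as np

def ppr_power_iteration(adj, source, alpha=0.15, iters=50):
    """Personalized PageRank by power iteration on a dense adjacency matrix."""
    deg = np.maximum(adj.sum(axis=1), 1.0)   # guard against isolated nodes
    walk = adj / deg[:, None]                # row-stochastic transition matrix
    e = np.zeros(adj.shape[0])
    e[source] = 1.0
    pi = e.copy()
    for _ in range(iters):
        pi = alpha * e + (1.0 - alpha) * (walk.T @ pi)
    return pi

def dp_ppr_release(adj, source, eps, sens, alpha=0.15, rng=None):
    """Release a PPR vector with eps-DP, ASSUMING its L1 sensitivity to a
    single edge change is at most sens; plain power iteration does not
    guarantee such a bound by itself."""
    rng = rng or np.random.default_rng()
    pi = ppr_power_iteration(adj, source, alpha)
    return pi + rng.laplace(scale=sens / eps, size=pi.size)
```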
- A Privacy-Preserving Subgraph-Level Federated Graph Neural Network via Differential Privacy [23.05377582226823]
We propose DP-FedRec, a DP-based federated GNN that addresses the non-independent and identically distributed (non-IID) data problem.
DP is applied not only to the weights but also to the edges of the intersection graph obtained from PSI, to fully protect the privacy of clients.
The evaluation demonstrates that DP-FedRec achieves better performance with the graph extension and that DP introduces only a small computational overhead.
arXiv Detail & Related papers (2022-06-07T08:14:45Z)
- GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation [19.247325210343035]
Graph Neural Networks (GNNs) are powerful models designed for graph data that learn node representations.
Recent studies have shown that GNNs can raise significant privacy concerns when graph data contain sensitive information.
We propose GAP, a novel differentially private GNN that safeguards the privacy of nodes and edges (the core aggregation-perturbation idea is sketched after this entry).
arXiv Detail & Related papers (2022-03-02T08:58:07Z)
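The aggregation-perturbation idea named in the entry above can be illustrated with a small sketch: clip each node embedding to unit L2 norm, sum embeddings over neighbors, and add Gaussian noise, so that one edge changes each aggregate by at most one unit-norm vector. The function name and the omitted noise calibration are assumptions; this is not the GAP reference implementation.

```python
import numpy as np

def noisy_aggregate(x, adj, sigma, rng=None):
    """Hedged sketch of aggregation perturbation for edge-level DP.

    x   : (n, d) node embeddings; adj : (n, n) 0/1 adjacency matrix.
    sigma must be calibrated to the unit-norm sensitivity (omitted here).
    """
    rng = rng or np.random.default_rng()
    norms = np.maximum(np.linalg.norm(x, axis=1, keepdims=True), 1e-12)
    agg = adj @ (x / norms)      # sum of unit-norm neighbor embeddings
    return agg + rng.normal(scale=sigma, size=agg.shape)
```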
- Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
arXiv Detail & Related papers (2021-02-23T18:25:13Z)
- A One-Pass Private Sketch for Most Machine Learning Tasks [48.17461258268463]
Differential privacy (DP) is a compelling privacy definition that explains the privacy-utility tradeoff via formal, provable guarantees.
We propose a private sketch that supports a multitude of machine learning tasks including regression, classification, density estimation, and more.
Our sketch consists of randomized contingency tables that are indexed with locality-sensitive hashing and constructed with an efficient one-pass algorithm (an illustrative version follows this entry).
arXiv Detail & Related papers (2020-06-16T17:47:48Z)
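A minimal sketch of the construction named in the entry above, under explicit assumptions: SimHash stands in for the LSH family, one count table per hash function, and a single shot of Laplace noise added after the one pass over the data. Names and the noise calibration are illustrative, not the paper's exact design.

```python
import numpy as np

def private_lsh_sketch(data, eps, n_tables=10, n_bits=8, rng=None):
    """One-pass private sketch: count tables indexed by SimHash codes."""
    rng = rng or np.random.default_rng()
    dim = data.shape[1]
    planes = rng.normal(size=(n_tables, n_bits, dim))  # random hyperplanes
    tables = np.zeros((n_tables, 2 ** n_bits))
    weights = 2 ** np.arange(n_bits)

    for x in data:                                     # single pass
        bits = (np.einsum('tbd,d->tb', planes, x) > 0).astype(int)
        idx = bits @ weights                           # bucket per table
        tables[np.arange(n_tables), idx] += 1

    # Each sample touches one bucket per table, so the L1 sensitivity of
    # the whole sketch is n_tables; one shot of Laplace noise gives eps-DP.
    return tables + rng.laplace(scale=n_tables / eps, size=tables.shape)
```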