PrivDPR: Synthetic Graph Publishing with Deep PageRank under Differential Privacy
- URL: http://arxiv.org/abs/2501.02354v1
- Date: Sat, 04 Jan 2025 18:19:21 GMT
- Title: PrivDPR: Synthetic Graph Publishing with Deep PageRank under Differential Privacy
- Authors: Sen Zhang, Haibo Hu, Qingqing Ye, Jianliang Xu
- Abstract summary: We design PrivDPR, a novel privacy-preserving deep PageRank for graph synthesis.
In particular, we achieve DP by adding noise to the gradient for a specific weight during learning.
We prove that the synthetic graph generated by PrivDPR satisfies node-level differential privacy.
- Score: 21.78459506259644
- Abstract: The objective of privacy-preserving synthetic graph publishing is to safeguard individuals' privacy while retaining the utility of original data. Most existing methods focus on graph neural networks under differential privacy (DP), and yet two fundamental problems in generating synthetic graphs remain open. First, the current research often encounters high sensitivity due to the intricate relationships between nodes in a graph. Second, DP is usually achieved through advanced composition mechanisms that tend to converge prematurely when working with a small privacy budget. In this paper, inspired by the simplicity, effectiveness, and ease of analysis of PageRank, we design PrivDPR, a novel privacy-preserving deep PageRank for graph synthesis. In particular, we achieve DP by adding noise to the gradient for a specific weight during learning. Utilizing weight normalization as a bridge, we theoretically reveal that increasing the number of layers in PrivDPR can effectively mitigate the high sensitivity and privacy budget splitting. Through formal privacy analysis, we prove that the synthetic graph generated by PrivDPR satisfies node-level DP. Experiments on real-world graph datasets show that PrivDPR preserves high data utility across multiple graph structural properties.
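The abstract's core mechanism, adding calibrated noise to the gradient of one designated weight during training and using weight normalization to control sensitivity, can be illustrated with a short DP-SGD-style snippet. The sketch below is a hypothetical illustration, not the paper's implementation: the function name `private_update`, the clipping bound, noise multiplier, learning rate, and the unit-Frobenius-norm form of weight normalization are all assumptions.

```python
import numpy as np

# Illustrative sketch of per-weight gradient perturbation in the spirit of
# the PrivDPR abstract. All names and hyperparameters are assumptions.
rng = np.random.default_rng(0)

CLIP_NORM = 1.0      # assumed bound on the gradient's L2 norm (its sensitivity)
NOISE_SCALE = 0.8    # assumed Gaussian noise multiplier
LEARNING_RATE = 0.05

def private_update(weight: np.ndarray, grad: np.ndarray) -> np.ndarray:
    """One noisy gradient step on the single designated weight matrix."""
    # Clip so the gradient's L2 norm is at most CLIP_NORM.
    norm = np.linalg.norm(grad)
    grad = grad * min(1.0, CLIP_NORM / (norm + 1e-12))
    # Add Gaussian noise calibrated to the clipping bound.
    noisy_grad = grad + rng.normal(0.0, NOISE_SCALE * CLIP_NORM, size=grad.shape)
    return weight - LEARNING_RATE * noisy_grad

# Toy training loop: only this one weight receives noise.
W = rng.normal(size=(16, 16))
for _ in range(100):
    fake_grad = rng.normal(size=W.shape)   # stand-in for a real backprop gradient
    W = private_update(W, fake_grad)
    # Weight normalization (one simple form): rescale to unit Frobenius norm,
    # loosely mirroring the abstract's use of normalization as a bridge.
    W = W / (np.linalg.norm(W) + 1e-12)
```

Restricting noise to a single weight keeps the sensitivity analysis local to that parameter, which loosely mirrors the abstract's point about avoiding privacy budget splitting across many noisy components.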
Related papers
- Differentially Private Graph Diffusion with Applications in Personalized PageRanks [15.529891375174165]
This work proposes a novel graph diffusion framework with edge-level differential privacy guarantees by using noisy diffusion iterates.
The algorithm injects Laplace noise at each diffusion step and adopts a degree-based thresholding function to mitigate the high sensitivity induced by low-degree nodes (a minimal sketch of this noisy-diffusion step appears after this list).
Our privacy loss analysis is based on Privacy Amplification by Iteration (PABI); to the best of our knowledge, this is the first effort that analyzes PABI with Laplace noise.
arXiv Detail & Related papers (2024-06-22T15:32:53Z)
- Privacy-Preserving Graph Embedding based on Local Differential Privacy [26.164722283887333]
We introduce a novel privacy-preserving graph embedding framework, named PrivGE, to protect node data privacy.
Specifically, we propose an LDP mechanism to obfuscate node data (see the LDP perturbation sketch after this list) and utilize personalized PageRank as the proximity measure to learn node representations.
Experiments on several real-world graph datasets demonstrate that PrivGE achieves a favorable balance between privacy and utility.
arXiv Detail & Related papers (2023-10-17T08:06:08Z)
- Independent Distribution Regularization for Private Graph Embedding [55.24441467292359]
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE), which uses an independent distribution penalty as a regularization term.
arXiv Detail & Related papers (2023-08-16T13:32:43Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To mitigate attacks on graph properties, obfuscated features that contain information from both vectors are communicated instead.
arXiv Detail & Related papers (2022-10-21T04:38:25Z)
- Heterogeneous Graph Neural Network for Privacy-Preserving Recommendation [25.95411320126426]
With advances in deep learning, social networks are commonly modeled as heterogeneous graphs and analyzed with heterogeneous graph neural networks (HGNNs).
We propose HeteDP, a novel privacy-preserving method for heterogeneous graph neural networks built on a differential privacy mechanism.
arXiv Detail & Related papers (2022-10-02T14:41:02Z)
- Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
arXiv Detail & Related papers (2022-09-08T22:43:50Z)
- Differentially Private Graph Learning via Sensitivity-Bounded Personalized PageRank [74.53857412207034]
We propose an algorithm which outputs an approximate PPR and has provably bounded sensitivity to input edges.
Our sensitivity-bounded PPR directly implies private algorithms for several graph learning tools, such as differentially private (DP) PPR ranking, DP node classification, and DP node embedding.
arXiv Detail & Related papers (2022-07-14T14:08:59Z)
- GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation [19.247325210343035]
Graph Neural Networks (GNNs) are powerful models that learn node representations from graph data.
Recent studies have shown that GNNs can raise significant privacy concerns when graph data contain sensitive information.
We propose GAP, a novel differentially private GNN that safeguards privacy of nodes and edges.
arXiv Detail & Related papers (2022-03-02T08:58:07Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
- Secure Deep Graph Generation with Link Differential Privacy [32.671503863933616]
We leverage the differential privacy (DP) framework to formulate and enforce rigorous privacy constraints on deep graph generation models.
In particular, we enforce edge-level DP by injecting calibrated noise into the gradients of a link reconstruction-based graph generation model.
Our proposed DPGGAN model is able to generate graphs with effectively preserved global structure and rigorously protected individual link privacy.
arXiv Detail & Related papers (2020-05-01T15:49:17Z)
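For the first related paper above (noisy graph diffusion with Laplace noise and degree-based thresholding), here is the minimal sketch referenced in that entry: a personalized-PageRank power iteration with Laplace noise injected into each iterate and low-degree nodes thresholded out of the transition matrix. The function name `noisy_ppr` and all parameter values are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical sketch of noisy graph diffusion: Laplace noise per iteration
# plus a degree threshold that removes low-degree rows, which would otherwise
# dominate the sensitivity of the diffusion. Parameters are illustrative.
rng = np.random.default_rng(0)

def noisy_ppr(adj: np.ndarray, seed: int, alpha: float = 0.15,
              iters: int = 20, noise_scale: float = 0.01,
              min_degree: int = 3) -> np.ndarray:
    n = adj.shape[0]
    degrees = adj.sum(axis=1)
    mask = degrees >= min_degree                 # degree-based thresholding
    P = np.zeros_like(adj, dtype=float)
    P[mask] = adj[mask] / degrees[mask, None]    # row-stochastic where kept
    s = np.zeros(n)
    s[seed] = 1.0                                # personalization vector
    r = s.copy()
    for _ in range(iters):
        r = alpha * s + (1 - alpha) * (P.T @ r)      # one diffusion step
        r += rng.laplace(0.0, noise_scale, size=n)   # per-iteration Laplace noise
        r = np.clip(r, 0.0, None)                    # keep iterates nonnegative
        r /= r.sum() + 1e-12                         # renormalize
    return r

# Toy usage on a small random undirected graph.
A = (rng.random((50, 50)) < 0.1).astype(float)
A = np.triu(A, 1)
A = A + A.T
scores = noisy_ppr(A, seed=0)
```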
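And for the PrivGE entry, the LDP perturbation sketch referenced in that summary: a standard Laplace-mechanism building block that each node applies locally to its own bounded feature vector before sharing it. The function name `ldp_perturb`, the feature range, and the epsilon value are assumptions for illustration; PrivGE's actual mechanism may differ.

```python
import numpy as np

# Hypothetical sketch of local differential privacy for node features via the
# Laplace mechanism. Each node perturbs its own d-dimensional bounded vector.
rng = np.random.default_rng(0)

def ldp_perturb(features: np.ndarray, epsilon: float = 1.0,
                lo: float = 0.0, hi: float = 1.0) -> np.ndarray:
    """Epsilon-LDP release of per-node feature rows with entries in [lo, hi]."""
    x = np.clip(features, lo, hi)        # enforce the assumed value range
    d = x.shape[1]
    scale = d * (hi - lo) / epsilon      # L1 sensitivity of a full row is d*(hi-lo)
    return x + rng.laplace(0.0, scale, size=x.shape)

# Each of 100 nodes releases a noisy version of its 8 bounded attributes.
X = rng.random((100, 8))
X_private = ldp_perturb(X, epsilon=2.0)
```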
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.