Secure Deep Graph Generation with Link Differential Privacy
- URL: http://arxiv.org/abs/2005.00455v3
- Date: Sat, 1 May 2021 03:50:35 GMT
- Title: Secure Deep Graph Generation with Link Differential Privacy
- Authors: Carl Yang, Haonan Wang, Ke Zhang, Liang Chen, Lichao Sun
- Abstract summary: We leverage the differential privacy (DP) framework to formulate and enforce rigorous privacy constraints on deep graph generation models.
In particular, we enforce edge-DP by injecting calibrated noise into the gradients of a link reconstruction-based graph generation model.
Our proposed DPGGAN model is able to generate graphs with effectively preserved global structure and rigorously protected individual link privacy.
- Score: 32.671503863933616
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many data mining and analytical tasks rely on the abstraction of networks
(graphs) to summarize relational structures among individuals (nodes). Since
relational data are often sensitive, we aim to seek effective approaches to
generate utility-preserved yet privacy-protected structured data. In this
paper, we leverage the differential privacy (DP) framework to formulate and
enforce rigorous privacy constraints on deep graph generation models, with a
focus on edge-DP to guarantee individual link privacy. In particular, we
enforce edge-DP by injecting calibrated noise into the gradients of a link
reconstruction-based graph generation model, while ensuring data utility by
improving structure learning with structure-oriented graph discrimination.
Extensive experiments on two real-world network datasets show that our proposed
DPGGAN model is able to generate graphs with effectively preserved global
structure and rigorously protected individual link privacy.
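The gradient perturbation described in the abstract follows the usual clip-then-noise recipe of DP-SGD. Below is a minimal PyTorch-style sketch, not the authors' code: it clips at the batch level for brevity, whereas a faithful DP-SGD implementation (e.g., Opacus) clips per-example gradients, and `clip_norm` / `noise_multiplier` are assumed values.

```python
import torch

def dp_noisy_step(model, loss, optimizer, clip_norm=1.0, noise_multiplier=1.1):
    """Clip-then-noise gradient update in the spirit of DP-SGD.

    NOTE: illustrative sketch only. Real DP-SGD clips *per-example*
    gradients; the choice of clip_norm and noise_multiplier determines
    the (epsilon, delta) privacy budget.
    """
    optimizer.zero_grad()
    loss.backward()
    # Bound the influence of any single example (here: the whole batch).
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
    # Gaussian noise calibrated to the clipping bound enforces DP.
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.grad.add_(torch.randn_like(p.grad),
                            alpha=noise_multiplier * clip_norm)
    optimizer.step()
```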
Related papers
- Unveiling Privacy Vulnerabilities: Investigating the Role of Structure in Graph Data [17.11821761700748]
This study advances the understanding of, and the protection against, privacy risks emanating from network structure.
We develop a novel graph private attribute inference attack, which acts as a pivotal tool for evaluating the potential for privacy leakage through network structures.
Our attack model poses a significant threat to user privacy, and our graph data publishing method successfully achieves the optimal privacy-utility trade-off.
arXiv Detail & Related papers (2024-07-26T07:40:54Z)
- Robust Utility-Preserving Text Anonymization Based on Large Language Models [80.5266278002083]
Text anonymization is crucial for sharing sensitive data while maintaining privacy.
Existing techniques face the emerging challenge of re-identification attacks enabled by Large Language Models.
This paper proposes a framework composed of three LLM-based components -- a privacy evaluator, a utility evaluator, and an optimization component.
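Read as a feedback loop, the three components suggest something like the following hypothetical sketch; every callable and threshold here is an illustrative assumption, not the paper's API.

```python
def anonymize_with_feedback(text, rewrite, privacy_score, utility_score,
                            privacy_floor=0.9, utility_floor=0.8, max_rounds=5):
    """Hypothetical loop over the three LLM-backed components.

    rewrite / privacy_score / utility_score stand in for the paper's
    optimization component and two evaluators; names and thresholds
    are assumptions for illustration.
    """
    candidate = text
    for _ in range(max_rounds):
        p = privacy_score(candidate)          # higher = harder to re-identify
        u = utility_score(candidate, text)    # higher = more meaning retained
        if p >= privacy_floor and u >= utility_floor:
            break
        candidate = rewrite(candidate, p, u)  # ask the LLM to revise
    return candidate
```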
arXiv Detail & Related papers (2024-07-16T14:28:56Z)
- Federated Learning Empowered by Generative Content [55.576885852501775]
Federated learning (FL) enables leveraging distributed private data for model training in a privacy-preserving way.
We propose a novel FL framework termed FedGC, designed to mitigate data heterogeneity issues by diversifying private data with generative content.
We conduct a systematic empirical study on FedGC, covering diverse baselines, datasets, scenarios, and modalities.
arXiv Detail & Related papers (2023-12-10T07:38:56Z)
- Privacy-preserving design of graph neural networks with applications to vertical federated learning [56.74455367682945]
We present an end-to-end graph representation learning framework called VESPER.
VESPER is capable of training high-performance GNN models over both sparse and dense graphs under reasonable privacy budgets.
arXiv Detail & Related papers (2023-10-31T15:34:59Z)
- Privacy-Preserving Graph Embedding based on Local Differential Privacy [26.164722283887333]
We introduce a novel privacy-preserving graph embedding framework, named PrivGE, to protect node data privacy.
Specifically, we propose an LDP mechanism to obfuscate node data and utilize personalized PageRank as the proximity measure to learn node representations.
Experiments on several real-world graph datasets demonstrate that PrivGE achieves an optimal balance between privacy and utility.
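As a rough illustration of the two ingredients named in this summary, here is a sketch combining a generic Laplace-mechanism LDP perturbation with power-iteration personalized PageRank. PrivGE's actual mechanism differs, and all parameter values are assumptions.

```python
import numpy as np

def ldp_laplace(x, eps, lo=0.0, hi=1.0):
    """Laplace-mechanism LDP: each user perturbs their own features.

    Sensitivity is the feature range (hi - lo); eps is the local
    privacy budget. Illustrative only; PrivGE's mechanism differs.
    """
    scale = (hi - lo) / eps
    return x + np.random.laplace(0.0, scale, size=x.shape)

def personalized_pagerank_embed(adj, feats, alpha=0.15, iters=20):
    """Propagate (perturbed) node features with personalized PageRank."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    P = adj / deg                 # row-stochastic transition matrix
    h = feats.copy()
    for _ in range(iters):
        h = (1 - alpha) * (P @ h) + alpha * feats
    return h
```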
arXiv Detail & Related papers (2023-10-17T08:06:08Z)
- Independent Distribution Regularization for Private Graph Embedding [55.24441467292359]
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE) with the aid of independent distribution penalty as a regularization term.
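The summary suggests an objective of roughly the following shape, assuming a standard variational graph auto-encoder ELBO; the precise form of the independence penalty $\Omega_{\text{indep}}$ in PVGAE may differ.

```latex
\mathcal{L} \;=\;
\underbrace{\mathbb{E}_{q(\mathbf{Z}\mid\mathbf{X},\mathbf{A})}\big[\log p(\mathbf{A}\mid\mathbf{Z})\big]}_{\text{link reconstruction}}
\;-\;
\underbrace{\mathrm{KL}\big(q(\mathbf{Z}\mid\mathbf{X},\mathbf{A})\,\big\|\,p(\mathbf{Z})\big)}_{\text{prior matching}}
\;-\;
\lambda\,\underbrace{\Omega_{\text{indep}}(\mathbf{Z})}_{\text{independence penalty}}
```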
arXiv Detail & Related papers (2023-08-16T13:32:43Z)
- Heterogeneous Graph Neural Network for Privacy-Preserving Recommendation [25.95411320126426]
With advances in deep learning, social networks are commonly modeled as heterogeneous graphs and analyzed with heterogeneous graph neural networks (HGNNs).
We propose HeteDP, a novel privacy-preserving method for heterogeneous graph neural networks based on a differential privacy mechanism.
arXiv Detail & Related papers (2022-10-02T14:41:02Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- Gromov-Wasserstein Discrepancy with Local Differential Privacy for Distributed Structural Graphs [7.4398547397969494]
We propose a privacy-preserving framework to analyze the GW discrepancy of node embeddings learned locally from graph neural networks.
Our experiments show that, with the strong privacy protection guaranteed by the $\varepsilon$-LDP algorithm, the proposed framework not only preserves privacy in graph learning but also presents a noised structural metric under GW distance.
arXiv Detail & Related papers (2022-02-01T23:32:33Z)
- Network Generation with Differential Privacy [4.297070083645049]
We consider the problem of generating private synthetic versions of real-world graphs containing private information.
We propose a generative model that can reproduce the properties of real-world networks while maintaining edge-differential privacy.
arXiv Detail & Related papers (2021-11-17T13:07:09Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract the private training graph data by inverting the target GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
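A minimal sketch of such a projected-gradient step on a relaxed adjacency matrix, assuming PyTorch; the sparsity and smoothness terms mentioned above are folded into `attack_loss` and otherwise omitted here.

```python
import torch

def projected_gradient_step(adj_relaxed, attack_loss, lr=0.01):
    """One projected-gradient update on edges relaxed to [0, 1].

    adj_relaxed must be a leaf tensor with requires_grad=True, e.g.
    torch.rand(n, n, requires_grad=True). attack_loss stands in for the
    inversion objective (reconstruction + regularizers) in the paper.
    """
    loss = attack_loss(adj_relaxed)
    grad, = torch.autograd.grad(loss, adj_relaxed)
    with torch.no_grad():
        adj_relaxed -= lr * grad                      # gradient step
        adj_relaxed.clamp_(0.0, 1.0)                  # project onto [0, 1]
        # Keep the relaxed adjacency symmetric for an undirected graph.
        adj_relaxed.copy_((adj_relaxed + adj_relaxed.t()) / 2)
    return adj_relaxed
```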
arXiv Detail & Related papers (2021-06-05T07:07:52Z)