Network Generation with Differential Privacy
- URL: http://arxiv.org/abs/2111.09085v1
- Date: Wed, 17 Nov 2021 13:07:09 GMT
- Title: Network Generation with Differential Privacy
- Authors: Xu Zheng, Nicholas McCarthy and Jer Hayes
- Abstract summary: We consider the problem of generating private synthetic versions of real-world graphs containing private information.
We propose a generative model that can reproduce the properties of real-world networks while maintaining edge-differential privacy.
- Score: 4.297070083645049
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of generating private synthetic versions of
real-world graphs containing private information while maintaining the utility
of generated graphs. Differential privacy is a gold standard for data privacy,
and the introduction of the differentially private stochastic gradient descent
(DP-SGD) algorithm has facilitated the training of private neural models in a
number of domains. Recent advances in graph generation via deep generative
networks have produced several high-performing models. We evaluate and compare
state-of-the-art models, including adjacency-matrix-based and edge-based
models, and show a practical implementation that favours the edge-list approach
utilizing the Gaussian noise mechanism when evaluated on commonly used graph
datasets. Based on our findings, we propose a generative model that can
reproduce the properties of real-world networks while maintaining
edge-differential privacy. The proposed model is based on a stochastic neural
network that generates discrete edge-list samples and is trained using the
Wasserstein GAN objective with the DP-SGD optimizer. Being the first approach
to combine these beneficial properties, our model contributes to further
research on graph data privacy.
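As a concrete illustration of the scheme the abstract describes, here is a minimal, hypothetical PyTorch sketch: a stochastic generator that emits (relaxed) discrete edge-list samples via Gumbel-softmax, a WGAN critic, and a hand-rolled DP-SGD step (per-example gradient clipping plus Gaussian noise) applied to the critic, the only component that touches private data. All module names, sizes, and hyperparameters are illustrative assumptions, not the authors' choices.

```python
# Hypothetical sketch (not the authors' released code): a WGAN critic
# trained with hand-rolled DP-SGD so that the edge-list generator can be
# released under edge-differential privacy.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM, NUM_NODES, NUM_EDGES = 32, 64, 128   # illustrative sizes
CLIP_NORM, NOISE_MULT, LR = 1.0, 1.1, 1e-4       # illustrative DP-SGD knobs

class EdgeListGenerator(nn.Module):
    """Maps noise to a relaxed edge list: NUM_EDGES (source, target) pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, NUM_EDGES * 2 * NUM_NODES))
    def forward(self, z):
        logits = self.net(z).view(-1, NUM_EDGES, 2, NUM_NODES)
        # Gumbel-softmax yields near-discrete endpoint choices while
        # keeping the sampling step differentiable.
        return F.gumbel_softmax(logits, tau=0.5, hard=True)

gen = EdgeListGenerator()
critic = nn.Sequential(
    nn.Flatten(), nn.Linear(NUM_EDGES * 2 * NUM_NODES, 256),
    nn.ReLU(), nn.Linear(256, 1))
opt_c = torch.optim.RMSprop(critic.parameters(), lr=LR)

def dp_critic_step(real_batch):
    """One DP-SGD step on the critic: per-example gradient clipping
    followed by the Gaussian mechanism."""
    opt_c.zero_grad()
    summed = [torch.zeros_like(p) for p in critic.parameters()]
    for real in real_batch:                        # microbatches of size 1
        fake = gen(torch.randn(1, LATENT_DIM)).detach()
        # WGAN critic loss: push real scores up, fake scores down
        loss = critic(fake).mean() - critic(real.unsqueeze(0)).mean()
        grads = torch.autograd.grad(loss, critic.parameters())
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (CLIP_NORM / (norm + 1e-6)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s += g * scale                         # clip each example's gradient
    for p, s in zip(critic.parameters(), summed):
        noise = torch.randn_like(s) * NOISE_MULT * CLIP_NORM
        p.grad = (s + noise) / len(real_batch)     # Gaussian mechanism
    opt_c.step()
    for p in critic.parameters():                  # WGAN weight clipping
        p.data.clamp_(-0.01, 0.01)
```

The generator itself can be updated with an ordinary optimizer: it never sees real edges directly, so once the critic is trained with DP-SGD, the privacy of generated graphs follows from differential privacy's post-processing property.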
Related papers
- Differential Privacy Regularization: Protecting Training Data Through Loss Function Regularization [49.1574468325115]
Training machine learning models based on neural networks requires large datasets, which may contain sensitive information.
Differentially private SGD (DP-SGD) requires modifying the standard stochastic gradient descent (SGD) algorithm used to train new models.
A novel regularization strategy is proposed to achieve the same goal in a more efficient manner.
arXiv Detail & Related papers (2024-09-25T17:59:32Z) - Local Differential Privacy in Graph Neural Networks: a Reconstruction Approach [17.000441871334683]
We propose a learning framework that can provide node privacy at the user level, while incurring low utility loss.
We focus on a decentralized notion of Differential Privacy, namely Local Differential Privacy.
We develop reconstruction methods to approximate features and labels from perturbed data.
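For intuition, the perturb-then-reconstruct pipeline this summary alludes to can be illustrated with generic k-ary randomized response plus frequency debiasing; the sketch below is a standard LDP textbook construction, not this paper's specific framework, and all names are illustrative.

```python
# Generic k-ary randomized response and frequency debiasing; an
# illustration of LDP perturbation/reconstruction, not this paper's method.
import numpy as np

def k_rr_perturb(labels, k, eps, rng):
    """Report the true label with prob. p, otherwise a uniformly chosen
    *different* label; this satisfies eps-local differential privacy."""
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    flip = rng.random(labels.shape) >= p
    draws = rng.integers(0, k - 1, size=labels.shape)
    draws = draws + (draws >= labels)      # skip over the true label
    out = labels.copy()
    out[flip] = draws[flip]
    return out

def debias_frequencies(reports, k, eps):
    """Unbiased estimate of the true label distribution from noisy reports."""
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    q = 1.0 / (np.exp(eps) + k - 1)        # prob. of reporting a given wrong label
    observed = np.bincount(reports, minlength=k) / len(reports)
    return (observed - q) / (p - q)        # may need clipping to [0, 1]

rng = np.random.default_rng(0)
true = rng.integers(0, 4, size=10_000)
noisy = k_rr_perturb(true, k=4, eps=1.0, rng=rng)
print(debias_frequencies(noisy, k=4, eps=1.0))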
arXiv Detail & Related papers (2023-09-15T17:35:51Z) - Challenging the Myth of Graph Collaborative Filtering: a Reasoned and Reproducibility-driven Analysis [50.972595036856035]
We present code that successfully replicates the results of six popular and recent graph recommendation models.
We compare these graph models with traditional collaborative filtering models that historically performed well in offline evaluations.
By investigating the information flow from users' neighborhoods, we aim to identify which models are influenced by intrinsic features in the dataset structure.
arXiv Detail & Related papers (2023-08-01T09:31:44Z) - Private Gradient Estimation is Useful for Generative Modeling [25.777591229903596]
We present a new private generative modeling approach where samples are generated via Hamiltonian dynamics with gradients of the private dataset estimated by a well-trained network.
Our model is able to generate data with a resolution of 256x256.
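Sampling with estimated gradients can be sketched with the momentum-free cousin of Hamiltonian dynamics, unadjusted Langevin dynamics; in the sketch below, `score_net` is a hypothetical trained network assumed to approximate the gradient of the log-density of the private data, and all step counts and sizes are illustrative.

```python
# Momentum-free illustration of gradient-driven sampling (Langevin
# dynamics); Hamiltonian variants add an auxiliary momentum variable.
import torch

def langevin_sample(score_net, shape, steps=500, step_size=1e-4):
    x = torch.randn(shape)                 # start from Gaussian noise
    for _ in range(steps):
        grad = score_net(x)                # estimated gradient of log-density
        x = (x + 0.5 * step_size * grad
               + (step_size ** 0.5) * torch.randn_like(x))
    return x
```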
arXiv Detail & Related papers (2023-05-18T02:51:17Z) - Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To mitigate attacks on graph properties, only obfuscated features that blend information from both representation vectors are communicated.
arXiv Detail & Related papers (2022-10-21T04:38:25Z) - Heterogeneous Graph Neural Network for Privacy-Preserving Recommendation [25.95411320126426]
With advances in deep learning, social networks are commonly modeled as heterogeneous graphs and processed with heterogeneous graph neural networks (HGNNs).
We propose HeteDP, a novel privacy-preserving method for heterogeneous graph neural networks based on a differential privacy mechanism.
arXiv Detail & Related papers (2022-10-02T14:41:02Z) - Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that existing defenses are not sufficiently effective, calling for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z) - Differentially Private Graph Classification with GNNs [5.830410490229634]
Graph Neural Networks (GNNs) have established themselves as state-of-the-art models for many machine learning applications.
We introduce differential privacy for graph-level classification, one of the key applications of machine learning on graphs.
We show results on a variety of synthetic and public datasets and evaluate the impact of different GNN architectures.
arXiv Detail & Related papers (2022-02-05T15:16:40Z) - Don't Generate Me: Training Differentially Private Generative Models with Sinkhorn Divergence [73.14373832423156]
We propose DP-Sinkhorn, a novel optimal transport-based generative method for learning data distributions from private data with differential privacy.
Unlike existing approaches for training differentially private generative models, we do not rely on adversarial objectives.
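The primitive behind a Sinkhorn divergence is entropically regularized optimal transport, computed by Sinkhorn's fixed-point iterations; the generic sketch below shows that primitive only (DP-Sinkhorn additionally privatizes the gradients, which is omitted here, and log-domain updates are preferred in practice for numerical stability).

```python
# Generic entropic-OT cost via Sinkhorn iterations; not the DP-Sinkhorn
# implementation, just the underlying optimal-transport primitive.
import torch

def sinkhorn_cost(x, y, eps=0.1, iters=100):
    """Entropic OT cost between point clouds x (n, d) and y (m, d).
    A Sinkhorn *divergence* also subtracts the self-terms (x,x), (y,y)."""
    cost = torch.cdist(x, y) ** 2              # pairwise squared distances
    K = torch.exp(-cost / eps)                 # Gibbs kernel
    a = torch.full((x.shape[0],), 1.0 / x.shape[0])   # uniform marginals
    b = torch.full((y.shape[0],), 1.0 / y.shape[0])
    u, v = torch.ones_like(a), torch.ones_like(b)
    for _ in range(iters):                     # fixed-point updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    plan = u[:, None] * K * v[None, :]         # plan = diag(u) K diag(v)
    return (plan * cost).sum()
```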
arXiv Detail & Related papers (2021-11-01T18:10:21Z) - GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract the private training graph data by inverting a GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
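The projected-gradient idea can be illustrated generically: relax the adjacency matrix to continuous values, take a gradient step on the attack objective, then project back onto symmetric matrices with entries in [0, 1]. In the sketch below, `loss_fn` is a hypothetical stand-in for GraphMI's objective and all names are illustrative.

```python
# Generic projected-gradient update on a relaxed adjacency matrix; an
# illustration of the technique, not GraphMI's actual module.
import torch

def projected_gradient_step(adj, loss_fn, lr=0.1):
    """One step: gradient descent on the attack loss, then projection
    onto symmetric matrices with entries in [0, 1] (near-discrete edges)."""
    adj = adj.detach().requires_grad_(True)
    loss = loss_fn(adj)
    loss.backward()
    with torch.no_grad():
        adj = adj - lr * adj.grad               # gradient step
        adj = 0.5 * (adj + adj.T)               # keep the graph undirected
        adj = adj.clamp(0.0, 1.0)               # project onto [0, 1]
        adj.fill_diagonal_(0.0)                 # no self-loops
    return adj
```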
arXiv Detail & Related papers (2021-06-05T07:07:52Z) - Secure Deep Graph Generation with Link Differential Privacy [32.671503863933616]
We leverage the differential privacy (DP) framework to formulate and enforce rigorous privacy constraints on deep graph generation models.
In particular, we enforce edge-DP by injecting proper noise to the gradients of a link reconstruction-based graph generation model.
Our proposed DPGGAN model is able to generate graphs with effectively preserved global structure and rigorously protected individual link privacy.
arXiv Detail & Related papers (2020-05-01T15:49:17Z)