GRAND: Graph Release with Assured Node Differential Privacy
- URL: http://arxiv.org/abs/2507.00402v2
- Date: Thu, 07 Aug 2025 15:06:17 GMT
- Title: GRAND: Graph Release with Assured Node Differential Privacy
- Authors: Suqing Liu, Xuan Bi, Tianxi Li
- Abstract summary: We propose GRAND, which is, to the best of our knowledge, the first network release mechanism that releases entire networks while ensuring node-level differential privacy and preserving structural properties. The effectiveness of the approach is evaluated through extensive experiments on both synthetic and real-world datasets.
- Score: 3.7346004746366384
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Differential privacy is a well-established framework for safeguarding sensitive information in data. While extensively applied across various domains, its application to network data -- particularly at the node level -- remains underexplored. Existing methods for node-level privacy either focus exclusively on query-based approaches, which restrict output to pre-specified network statistics, or fail to preserve key structural properties of the network. In this work, we propose GRAND (Graph Release with Assured Node Differential privacy), which is, to the best of our knowledge, the first network release mechanism that releases entire networks while ensuring node-level differential privacy and preserving structural properties. Under a broad class of latent space models, we show that the released network asymptotically follows the same distribution as the original network. The effectiveness of the approach is evaluated through extensive experiments on both synthetic and real-world datasets.
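For intuition, here is a minimal sketch of the kind of latent space model the abstract refers to: each node carries a latent position, and edges form independently with probability given by a link function of the positions. The logistic link, parameter names, and sampling code below are illustrative assumptions; this shows the model class GRAND targets, not its release mechanism.

```python
import numpy as np

def sample_latent_space_graph(n, d=2, alpha=1.0, seed=None):
    """Sample an undirected graph from a simple latent space model:
    P(A_ij = 1) = sigmoid(alpha - ||z_i - z_j||) with z_i ~ N(0, I_d).
    Illustrative of the model class only."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(n, d))                               # latent node positions
    dist = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
    p = 1.0 / (1.0 + np.exp(dist - alpha))                    # logistic link
    A = np.triu(rng.random((n, n)) < p, k=1)                  # independent upper-triangular draws
    return (A | A.T).astype(int), z                           # symmetrize, no self-loops

A, z = sample_latent_space_graph(100, seed=42)
print(A.sum() // 2, "edges")
```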
Related papers
- Efficient and Privacy-Preserved Link Prediction via Condensed Graphs [49.898152180805454]
We introduce HyDRO+, a graph condensation method guided by algebraic Jaccard similarity. Our method achieves nearly 20× faster training and reduces storage requirements by 452×, compared to link prediction on the original networks.
arXiv Detail & Related papers (2025-03-15T14:54:04Z)
- Consistent community detection in multi-layer networks with heterogeneous differential privacy [4.451479907610764]
We propose a personalized edge flipping mechanism that allows data publishers to protect edge information based on each node's privacy preference.
It can achieve differential privacy while preserving the community structure under the multi-layer degree-corrected block model.
We show that stronger privacy protection of edges can be obtained for a proportion of nodes while allowing other nodes to give up their privacy.
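As a concrete illustration of edge flipping, the sketch below applies randomized response to each potential edge, with the flip probability governed by the stricter of the two endpoints' privacy budgets. The per-node budget vector and the min-combination rule are assumptions for illustration, not necessarily the paper's exact personalization scheme.

```python
import numpy as np

def personalized_edge_flip(A, eps, seed=None):
    """Randomized response on an adjacency matrix with per-node budgets:
    edge (i, j) is kept with probability e^eps_ij / (1 + e^eps_ij) and
    flipped otherwise, where eps_ij = min(eps[i], eps[j]) so the stricter
    endpoint governs the pair."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    eps_pair = np.minimum(eps[:, None], eps[None, :])
    keep_prob = np.exp(eps_pair) / (1.0 + np.exp(eps_pair))
    flip = rng.random((n, n)) > keep_prob
    noisy = np.triu(np.where(flip, 1 - A, A), k=1)            # flip bits, keep upper triangle
    return noisy + noisy.T

rng = np.random.default_rng(0)
A = np.triu((rng.random((50, 50)) < 0.1).astype(int), k=1)
A = A + A.T
eps = np.full(50, 1.0)
eps[:10] = 0.2                                                # first 10 nodes want stronger privacy
print(personalized_edge_flip(A, eps, seed=1).sum() // 2, "edges after flipping")
```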
arXiv Detail & Related papers (2024-06-20T22:49:55Z)
- Multi-perspective Memory Enhanced Network for Identifying Key Nodes in Social Networks [51.54002032659713]
We propose a novel Multi-perspective Memory Enhanced Network (MMEN) for identifying key nodes in social networks.
MMEN mines key nodes from multiple perspectives and utilizes memory networks to store historical information.
Our method significantly outperforms previous methods.
arXiv Detail & Related papers (2024-03-22T14:29:03Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key to technological progress in many real-world application scenarios that involve privacy-sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
- Independent Distribution Regularization for Private Graph Embedding [55.24441467292359]
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE), which uses an independent distribution penalty as a regularization term.
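One plausible reading of an "independent distribution penalty" is a decorrelation term between blocks of the latent code; the sketch below penalizes the cross-covariance between two halves of an embedding. This is an illustrative stand-in under that assumption, not PVGAE's actual regularizer.

```python
import numpy as np

def cross_cov_penalty(z_a, z_b):
    """Drive the cross-covariance between two latent blocks toward zero,
    a crude surrogate for statistical independence between them."""
    z_a = z_a - z_a.mean(axis=0, keepdims=True)
    z_b = z_b - z_b.mean(axis=0, keepdims=True)
    cov = z_a.T @ z_b / (z_a.shape[0] - 1)
    return float((cov ** 2).sum())

z = np.random.default_rng(0).normal(size=(256, 16))           # stand-in node embeddings
# Penalty is near zero when the two blocks are independent Gaussians:
print(cross_cov_penalty(z[:, :8], z[:, 8:]))
```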
arXiv Detail & Related papers (2023-08-16T13:32:43Z)
- Differentially Private Graph Neural Network with Importance-Grained Noise Adaption [6.319864669924721]
Graph Neural Networks (GNNs) with differential privacy have been proposed to preserve graph privacy when nodes represent personal and sensitive information.
We study the problem of importance-grained privacy, where nodes contain personal data that need to be kept private but are critical for training a GNN.
We propose NAP-GNN, a node-grained privacy-preserving GNN algorithm with privacy guarantees based on adaptive differential privacy to safeguard node information.
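A minimal sketch of the importance-grained idea: allocate larger privacy budgets, and hence less noise, to more important nodes. The interpolation rule, the Laplace mechanism, and degree as an importance proxy are all assumptions for illustration, not NAP-GNN's actual adaptive scheme.

```python
import numpy as np

def importance_graded_laplace(x, importance, eps_min=0.1, eps_max=2.0,
                              sensitivity=1.0, seed=None):
    """Add Laplace noise to each node's feature row, giving important nodes
    a larger budget eps_i (hence less noise, scale = sensitivity / eps_i)."""
    rng = np.random.default_rng(seed)
    imp = importance - importance.min()
    imp = imp / (imp.max() + 1e-12)                           # normalize to [0, 1]
    eps = eps_min + imp * (eps_max - eps_min)                 # per-node budget
    return x + rng.laplace(scale=sensitivity / eps[:, None], size=x.shape)

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 8))                                 # node features
degree = rng.poisson(5, size=100).astype(float)               # toy importance proxy
noisy = importance_graded_laplace(x, degree, seed=1)
```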
arXiv Detail & Related papers (2023-08-09T13:18:41Z)
- Heterogeneous Graph Neural Network for Privacy-Preserving Recommendation [25.95411320126426]
With advances in deep learning, social networks are commonly modeled as heterogeneous graphs and analyzed with heterogeneous graph neural networks (HGNNs).
We propose a novel heterogeneous graph neural network privacy-preserving method based on a differential privacy mechanism named HeteDP.
arXiv Detail & Related papers (2022-10-02T14:41:02Z)
- Cross-Network Social User Embedding with Hybrid Differential Privacy Guarantees [81.6471440778355]
We propose a Cross-network Social User Embedding framework, namely DP-CroSUE, to learn the comprehensive representations of users in a privacy-preserving way.
In particular, for each heterogeneous social network, we first introduce a hybrid differential privacy notion to capture the variation of privacy expectations for heterogeneous data types.
To further enhance user embeddings, a novel cross-network GCN embedding model is designed to transfer knowledge across networks through those aligned users.
arXiv Detail & Related papers (2022-09-04T06:22:37Z)
- GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation [19.247325210343035]
Graph Neural Networks (GNNs) are powerful models designed for graph data that learn node representations.
Recent studies have shown that GNNs can raise significant privacy concerns when graph data contain sensitive information.
We propose GAP, a novel differentially private GNN that safeguards privacy of nodes and edges.
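Aggregation perturbation can be sketched as: clip each neighbor embedding to a bounded norm, sum over neighbors, and add Gaussian noise calibrated to that bound, so any single node's influence on the aggregate is limited. The function below is a hedged sketch with assumed parameter names; privacy accounting across layers and queries is omitted.

```python
import numpy as np

def noisy_aggregate(A, H, clip=1.0, noise_mult=1.0, seed=None):
    """Sum neighbor embeddings after clipping each row to L2 norm <= clip,
    then add Gaussian noise scaled to that bound. Accounting of the overall
    (epsilon, delta) budget across layers is omitted here."""
    rng = np.random.default_rng(seed)
    norms = np.linalg.norm(H, axis=1, keepdims=True)
    H_c = H * np.minimum(1.0, clip / np.maximum(norms, 1e-12))  # per-row clipping
    agg = A @ H_c                                               # sum over neighbors
    return agg + rng.normal(scale=noise_mult * clip, size=agg.shape)

rng = np.random.default_rng(0)
A = np.triu((rng.random((64, 64)) < 0.1).astype(float), k=1)
A = A + A.T
H = rng.normal(size=(64, 16))                                  # node embeddings
Z = noisy_aggregate(A, H, seed=1)
```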
arXiv Detail & Related papers (2022-03-02T08:58:07Z)
- Gromov-Wasserstein Discrepancy with Local Differential Privacy for Distributed Structural Graphs [7.4398547397969494]
We propose a privacy-preserving framework to analyze the GW discrepancy of node embeddings learned locally by graph neural networks.
Our experiments show that, with strong privacy protections guaranteed by the $\varepsilon$-LDP algorithm, the proposed framework not only preserves privacy in graph learning but also presents a noised structural metric under GW distance.
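A standard building block for the $\varepsilon$-LDP protection mentioned above is the Laplace mechanism applied locally to each node's embedding before it is shared; the sketch assumes coordinates are clipped to a known box so the L1 sensitivity is bounded. This is a generic LDP primitive, not the paper's specific algorithm.

```python
import numpy as np

def ldp_laplace_embed(z, eps=1.0, bound=1.0, seed=None):
    """eps-LDP release of one node's embedding via the Laplace mechanism:
    clip coordinates to [-bound, bound], so the L1 sensitivity between any
    two embeddings is 2 * bound * d, and add per-coordinate Laplace noise."""
    rng = np.random.default_rng(seed)
    z = np.clip(z, -bound, bound)
    sensitivity = 2.0 * bound * z.size
    return z + rng.laplace(scale=sensitivity / eps, size=z.shape)

z = np.random.default_rng(0).normal(size=8)                    # one node's embedding
print(ldp_laplace_embed(z, eps=2.0, seed=1))
```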
arXiv Detail & Related papers (2022-02-01T23:32:33Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural network training, such as gradient clipping and noise addition, affect the robustness of the model.
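The two ingredients named here are the core of a DP-SGD-style update: clip each example's gradient, average, and add Gaussian noise scaled to the clipping norm. The sketch below shows one such step on toy per-example gradients, leaving out privacy accounting.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip=1.0,
                noise_mult=1.0, seed=None):
    """One DP-SGD-style update: clip each example's gradient to L2 norm
    <= clip, average, add Gaussian noise scaled to clip, then step."""
    rng = np.random.default_rng(seed)
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    mean_grad = clipped.mean(axis=0)
    noise = rng.normal(scale=noise_mult * clip / len(per_example_grads),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)

rng = np.random.default_rng(0)
params = rng.normal(size=10)
grads = rng.normal(size=(32, 10))                              # toy per-example gradients
params = dp_sgd_step(params, grads, seed=1)
```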
arXiv Detail & Related papers (2020-12-14T18:59:24Z)