Large-Scale Privacy-Preserving Network Embedding against Private Link
Inference Attacks
- URL: http://arxiv.org/abs/2205.14440v1
- Date: Sat, 28 May 2022 13:59:39 GMT
- Title: Large-Scale Privacy-Preserving Network Embedding against Private Link
Inference Attacks
- Authors: Xiao Han, Leye Wang, Junjie Wu, Yuncong Yang
- Abstract summary: We address a novel problem of privacy-preserving network embedding against private link inference attacks.
We propose to perturb the original network by adding or removing links, expecting the embedding generated on the perturbed network to leak little information about private links while holding high utility for various downstream tasks.
- Score: 12.434976161956401
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Network embedding represents network nodes by low-dimensional
informative vectors. While it is generally effective for various downstream
tasks, it may leak private information about networks, such as hidden private
links. In this work, we address a novel problem of privacy-preserving network
embedding against private link inference attacks. Basically, we propose to
perturb the original network by adding or removing links, expecting the
embedding generated on the perturbed network to leak little information about
private links while holding high utility for various downstream tasks. Towards
this goal, we first propose general measurements to quantify the privacy gain
and utility loss incurred by candidate network perturbations; we then design a
PPNE framework to iteratively identify the perturbation solution with the best
privacy-utility trade-off. Furthermore, we propose several techniques to
accelerate PPNE and ensure its scalability. For instance, since skip-gram
embedding methods such as DeepWalk and LINE can be seen as matrix factorization
with closed-form embedding results, we devise efficient approximation methods
for privacy gain and utility loss that avoid repetitive, time-consuming
embedding training for every candidate network perturbation in each iteration.
Experiments on real-life network datasets (with up to millions of nodes) verify
that PPNE outperforms the baselines, sacrificing less utility and obtaining
higher privacy protection.
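The iterative perturbation search described above can be sketched as a greedy loop: flip one candidate link, re-embed, score a privacy-utility objective, and keep the best flip. The toy embedding below (top adjacency eigenvectors) and the scoring functions are illustrative assumptions standing in for PPNE's matrix-factorization approximations, not the paper's implementation:

```python
import itertools
import numpy as np

def embed(adj):
    """Toy closed-form embedding: top-2 eigenvectors of the adjacency
    matrix (a stand-in for the matrix-factorization view of DeepWalk/LINE)."""
    _, vecs = np.linalg.eigh(adj)
    return vecs[:, -2:]

def link_score(emb, i, j):
    """Inner-product link predictor used by the inference attack."""
    return float(emb[i] @ emb[j])

def greedy_perturb(adj, private_links, utility_links, budget, lam=1.0):
    """Greedily flip `budget` links to hide private links at low utility cost."""
    base = embed(adj)
    work = adj.copy()
    n = adj.shape[0]
    for _ in range(budget):
        best, best_obj = None, -np.inf
        for i, j in itertools.combinations(range(n), 2):
            if (i, j) in private_links:
                continue  # never touch the links we are trying to hide
            work[i, j] = work[j, i] = 1 - work[i, j]  # try the flip
            emb = embed(work)
            # Privacy gain: how much private-link scores shrink after the flip.
            gain = sum(abs(link_score(base, a, b)) - abs(link_score(emb, a, b))
                       for a, b in private_links)
            # Utility loss: how much scores on task-relevant links drift.
            loss = sum(abs(link_score(base, a, b) - link_score(emb, a, b))
                       for a, b in utility_links)
            obj = gain - lam * loss
            work[i, j] = work[j, i] = 1 - work[i, j]  # undo the flip
            if obj > best_obj:
                best_obj, best = obj, (i, j)
        i, j = best
        work[i, j] = work[j, i] = 1 - work[i, j]  # commit the best flip
    return work
```

Note that each iteration pays one embedding per candidate flip; the acceleration techniques in the paper exist precisely to avoid this repeated re-embedding.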
Related papers
- Differentially Private Data Release on Graphs: Inefficiencies and Unfairness [48.96399034594329]
This paper characterizes the impact of Differential Privacy on bias and unfairness in the context of releasing information about networks.
We consider a network release problem where the network structure is known to all, but the weights on edges must be released privately.
Our work provides theoretical foundations and empirical evidence into the bias and unfairness arising due to privacy in these networked decision problems.
arXiv Detail & Related papers (2024-08-08T08:37:37Z)
- Unveiling Privacy Vulnerabilities: Investigating the Role of Structure in Graph Data [17.11821761700748]
This study advances the understanding and protection against privacy risks emanating from network structure.
We develop a novel graph private attribute inference attack, which acts as a pivotal tool for evaluating the potential for privacy leakage through network structures.
Our attack model poses a significant threat to user privacy, and our graph data publishing method successfully achieves the optimal privacy-utility trade-off.
arXiv Detail & Related papers (2024-07-26T07:40:54Z)
- Consistent community detection in multi-layer networks with heterogeneous differential privacy [4.451479907610764]
We propose a personalized edge flipping mechanism that allows data publishers to protect edge information based on each node's privacy preference.
It can achieve differential privacy while preserving the community structure under the multi-layer degree-corrected block model.
We show that better privacy protection of edges can be obtained for a proportion of nodes while allowing other nodes to give up their privacy.
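The personalized edge-flipping idea can be sketched as randomized response on adjacency entries, with a per-node privacy budget. The function name and the min-of-endpoints rule below are illustrative assumptions, not the paper's mechanism:

```python
import math
import random

def flip_edges(adj, node_eps, seed=0):
    """Randomized-response edge flipping: each edge (i, j) is flipped with a
    probability driven by the stricter of its two endpoints' personal privacy
    budgets. Smaller node_eps[i] means node i asks for more flips."""
    rng = random.Random(seed)
    n = len(adj)
    out = [row[:] for row in adj]
    for i in range(n):
        for j in range(i + 1, n):
            eps = min(node_eps[i], node_eps[j])
            p_flip = 1.0 / (1.0 + math.exp(eps))  # standard RR flip probability
            if rng.random() < p_flip:
                out[i][j] = out[j][i] = 1 - out[i][j]
    return out
```

Nodes that "give up their privacy" correspond to a large epsilon (flip probability near zero), while strict nodes drag every incident edge toward a fair coin flip.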
arXiv Detail & Related papers (2024-06-20T22:49:55Z)
- Preserving Node-level Privacy in Graph Neural Networks [8.823710998526705]
We propose a solution that addresses the issue of node-level privacy in Graph Neural Networks (GNNs).
Our protocol consists of two main components: 1) a sampling routine called HeterPoisson, which employs a specialized node sampling strategy and a series of tailored operations to generate a batch of sub-graphs with desired properties, and 2) a randomization routine that utilizes symmetric Laplace noise instead of the commonly used Gaussian noise.
Our protocol enables GNN learning with good performance, as demonstrated by experiments on five real-world datasets.
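The symmetric-Laplace randomization can be illustrated on an aggregated per-batch gradient. The scale b = sensitivity / epsilon below is the generic Laplace-mechanism calibration under L1 sensitivity, not the paper's exact privacy accounting:

```python
import numpy as np

def laplace_perturb(grad_sum, sensitivity, epsilon, rng=None):
    """Add i.i.d. symmetric Laplace noise to an aggregated gradient.
    Scale b = sensitivity / epsilon is the textbook calibration for a
    single release under L1 sensitivity (illustrative only)."""
    rng = np.random.default_rng(rng)
    scale = sensitivity / epsilon
    return grad_sum + rng.laplace(loc=0.0, scale=scale, size=grad_sum.shape)
```

The heavier tails of the Laplace distribution (versus the commonly used Gaussian) are precisely the design choice the paper's randomization routine revisits.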
arXiv Detail & Related papers (2023-11-12T16:21:29Z)
- Blink: Link Local Differential Privacy in Graph Neural Networks via Bayesian Estimation [79.64626707978418]
We propose using link local differential privacy over decentralized nodes to train graph neural networks.
Our approach spends the privacy budget separately on links and degrees of the graph for the server to better denoise the graph topology.
Our approach outperforms existing methods in terms of accuracy under varying privacy budgets.
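On the server side, randomized-response link reports must be debiased before use. The unbiased estimator below is a generic textbook baseline for this denoising step, whereas Blink replaces it with Bayesian estimation:

```python
import math

def debias_edge_count(reported_ones, n_reports, epsilon):
    """Unbiased estimate of the true number of 1-bits from randomized-response
    reports where each bit is kept with probability p = e^eps / (1 + e^eps)."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    # E[reported_ones] = true * p + (n_reports - true) * (1 - p),
    # so solve for `true`.
    return (reported_ones - n_reports * (1.0 - p)) / (2.0 * p - 1.0)
```

For example, with epsilon = ln 3 each bit is kept with probability 3/4, so 40 true edges among 100 reports yield 45 reported ones in expectation, and the estimator recovers 40.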
arXiv Detail & Related papers (2023-09-06T17:53:31Z)
- Differentially Private Graph Neural Network with Importance-Grained Noise Adaption [6.319864669924721]
Graph Neural Networks (GNNs) with differential privacy have been proposed to preserve graph privacy when nodes represent personal and sensitive information.
We study the problem of importance-grained privacy, where nodes contain personal data that need to be kept private but are critical for training a GNN.
We propose NAP-GNN, a node-grained privacy-preserving GNN algorithm with privacy guarantees based on adaptive differential privacy to safeguard node information.
arXiv Detail & Related papers (2023-08-09T13:18:41Z)
- Over-the-Air Federated Learning with Privacy Protection via Correlated Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection while sacrificing the training accuracy.
In this work, we aim at minimizing privacy leakage to the adversary and the degradation of model accuracy at the edge server.
arXiv Detail & Related papers (2022-10-05T13:13:35Z)
- Cross-Network Social User Embedding with Hybrid Differential Privacy Guarantees [81.6471440778355]
We propose a Cross-network Social User Embedding framework, namely DP-CroSUE, to learn the comprehensive representations of users in a privacy-preserving way.
In particular, for each heterogeneous social network, we first introduce a hybrid differential privacy notion to capture the variation of privacy expectations for heterogeneous data types.
To further enhance user embeddings, a novel cross-network GCN embedding model is designed to transfer knowledge across networks through those aligned users.
arXiv Detail & Related papers (2022-09-04T06:22:37Z)
- Sphynx: ReLU-Efficient Network Design for Private Inference [49.73927340643812]
We focus on private inference (PI), where the goal is to perform inference on a user's data sample using a service provider's model.
Existing PI methods for deep networks enable cryptographically secure inference with little drop in functionality.
This paper presents Sphynx, a ReLU-efficient network design method based on micro-search strategies for convolutional cell design.
arXiv Detail & Related papers (2021-06-17T18:11:10Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks, trained with differential privacy, in some settings might be even more vulnerable in comparison to non-private versions.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
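The two ingredients mentioned, gradient clipping and noise addition, compose the standard DP-SGD update, sketched generically here (not the paper's experimental configuration):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr, clip_norm, noise_mult, rng=None):
    """One DP-SGD step: clip each per-example gradient to L2 norm clip_norm,
    average, then add Gaussian noise with std noise_mult * clip_norm / batch."""
    rng = np.random.default_rng(rng)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(clipped),
                       size=mean.shape)
    return params - lr * (mean + noise)
```

Both knobs interact with robustness: clipping biases large gradients and the injected noise perturbs the decision boundary, which is what the paper probes experimentally.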
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.