Privacy-Utility Trades in Crowdsourced Signal Map Obfuscation
- URL: http://arxiv.org/abs/2201.04782v1
- Date: Thu, 13 Jan 2022 03:46:22 GMT
- Title: Privacy-Utility Trades in Crowdsourced Signal Map Obfuscation
- Authors: Jiang Zhang, Lillian Clark, Matthew Clark, Konstantinos Psounis, Peter
Kairouz
- Abstract summary: Crowdsourced cellular signal strength measurements can be used to generate signal maps to improve network performance.
We consider obfuscating such data before the data leaves the mobile device.
Our evaluation results, based on multiple, diverse, real-world signal map datasets, demonstrate the feasibility of concurrently achieving adequate privacy and utility.
- Score: 20.58763760239068
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cellular providers and data aggregating companies crowdsource cellular signal
strength measurements from user devices to generate signal maps, which can be
used to improve network performance. Recognizing that this data collection may
be at odds with growing awareness of privacy concerns, we consider obfuscating
such data before the data leaves the mobile device. The goal is to increase
privacy such that it is difficult to recover sensitive features from the
obfuscated data (e.g. user ids and user whereabouts), while still allowing
network providers to use the data for improving network services (i.e. create
accurate signal maps). To examine this privacy-utility tradeoff, we identify
privacy and utility metrics and threat models suited to signal strength
measurements. We then obfuscate the measurements using several preeminent
techniques, spanning differential privacy, generative adversarial privacy, and
information-theoretic privacy techniques, in order to benchmark a variety of
promising obfuscation approaches and provide guidance to real-world engineers
who are tasked to build signal maps that protect privacy without hurting
utility. Our evaluation results, based on multiple, diverse, real-world signal
map datasets, demonstrate the feasibility of concurrently achieving adequate
privacy and utility, with obfuscation strategies which use the structure and
intended use of datasets in their design, and target average-case, rather than
worst-case, guarantees.
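As one concrete on-device illustration, the sketch below obfuscates a single signal strength reading with the standard Gaussian mechanism before it would leave the device. The clipping range, sensitivity, and (epsilon, delta) values are illustrative assumptions, not the paper's benchmarked configurations.

```python
# A minimal sketch of on-device DP obfuscation, assuming one scalar RSS
# reading per report; all parameter values here are illustrative.
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng):
    """Add Gaussian noise calibrated for (epsilon, delta)-DP."""
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(0.0, sigma)

rng = np.random.default_rng(0)
rss_dbm = -87.0                      # one crowdsourced measurement, in dBm
lo, hi = -120.0, -40.0               # plausible RSS range (assumption)
clipped = float(np.clip(rss_dbm, lo, hi))
obfuscated = gaussian_mechanism(clipped, sensitivity=hi - lo,
                                epsilon=1.0, delta=1e-5, rng=rng)
print(f"reported RSS: {obfuscated:.1f} dBm")
```

Clipping first is what bounds the sensitivity; without it, the calibrated noise scale would be unbounded.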
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on data and allows for defining non-sensitive temporal regions without DP application, or for combining differential privacy with other privacy techniques within data samples.
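A minimal sketch of that selective idea, assuming a boolean sensitivity mask and the Laplace mechanism (the names and parameters below are illustrative, not the paper's implementation):

```python
# Masked-DP-style obfuscation sketch: noise only the entries a mask flags
# as sensitive; everything else passes through unchanged.
import numpy as np

def masked_laplace(x, mask, sensitivity, epsilon, rng):
    noise = rng.laplace(0.0, sensitivity / epsilon, size=x.shape)
    return np.where(mask, x + noise, x)

rng = np.random.default_rng(0)
frame = rng.uniform(0.0, 1.0, size=(4, 4))   # toy data sample
sensitive = np.zeros((4, 4), dtype=bool)
sensitive[1:3, 1:3] = True                   # region flagged as sensitive
private = masked_laplace(frame, sensitive, sensitivity=1.0, epsilon=0.5, rng=rng)
```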
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- Unveiling Privacy Vulnerabilities: Investigating the Role of Structure in Graph Data [17.11821761700748]
This study advances the understanding of, and protection against, privacy risks emanating from network structure.
We develop a novel graph private attribute inference attack, which acts as a pivotal tool for evaluating the potential for privacy leakage through network structures.
Our attack model poses a significant threat to user privacy, and our graph data publishing method successfully achieves the optimal privacy-utility trade-off.
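While the paper's attack is more sophisticated, even a toy structure-only inference, such as a neighbor majority vote exploiting homophily, shows why graph structure alone can leak attributes:

```python
# Hypothetical structure-based attribute inference: guess a hidden attribute
# from the majority label among a node's neighbors.
from collections import Counter

edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]   # toy social graph
known = {0: "A", 1: "A", 3: "B", 4: "B"}           # publicly known attributes

adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def infer(node):
    votes = Counter(known[n] for n in adj.get(node, ()) if n in known)
    return votes.most_common(1)[0][0] if votes else None

print(infer(2))  # node 2's hidden attribute, guessed from neighbors: "A"
```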
arXiv Detail & Related papers (2024-07-26T07:40:54Z)
- Defining 'Good': Evaluation Framework for Synthetic Smart Meter Data [14.779917834583577]
We show that standard privacy attack methods are inadequate for assessing privacy risks of smart meter datasets.
We propose an improved method by injecting training data with implausible outliers, then launching privacy attacks directly on these outliers.
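A rough sketch of that auditing recipe, with a stand-in for the generative model and an illustrative distance threshold:

```python
# Outlier-injection privacy audit sketch: plant implausible canaries in the
# training data, then flag any canary the synthetic output lands close to.
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(0.5, 0.1, size=(1000, 48))      # toy daily load profiles
canaries = rng.uniform(5.0, 6.0, size=(5, 48))     # implausible outliers
train_aug = np.vstack([train, canaries])

# ... fit a generative model on train_aug, then sample from it ...
synthetic = rng.normal(0.5, 0.1, size=(1000, 48))  # stand-in for model output

dists = np.linalg.norm(synthetic[:, None, :] - canaries[None, :, :], axis=2)
leaked = dists.min(axis=0) < 1.0                   # threshold is illustrative
print(f"canaries flagged as memorized: {leaked.sum()} / {len(canaries)}")
```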
arXiv Detail & Related papers (2024-07-16T14:41:27Z)
- Ungeneralizable Examples [70.76487163068109]
Current approaches to creating unlearnable data involve incorporating small, specially designed noise.
We extend the concept of unlearnable data to conditional data learnability and introduce UnGeneralizable Examples (UGEs).
UGEs exhibit learnability for authorized users while maintaining unlearnability for potential hackers.
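As a loose analogy for conditional learnability (not the paper's UGE construction), a keyed pseudorandom perturbation lets authorized key-holders restore the data while everyone else trains on noisy samples:

```python
# Hypothetical keyed-perturbation sketch: only holders of SECRET_KEY can
# strip the noise before training.
import numpy as np

SECRET_KEY = 1234  # shared with authorized users only (assumption)

def perturb(x, key, scale=0.1):
    noise = np.random.default_rng(key).normal(0.0, scale, size=x.shape)
    return x + noise

def restore(x_protected, key, scale=0.1):
    noise = np.random.default_rng(key).normal(0.0, scale, size=x_protected.shape)
    return x_protected - noise

x = np.ones((8, 8))                          # toy image
protected = perturb(x, SECRET_KEY)           # released publicly
recovered = restore(protected, SECRET_KEY)   # authorized users train on this
assert np.allclose(recovered, x)
```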
arXiv Detail & Related papers (2024-04-22T09:29:14Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
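The simplest instance of that release pattern is a Laplace-noised histogram; the paper surveys far richer DP generative models, but the principle of publishing only a sanitized statistic is the same:

```python
# Minimal DP publishing sketch: release noisy histogram counts. Each record
# changes one bin count by 1, so the L1 sensitivity is 1.
import numpy as np

def dp_histogram(values, bins, epsilon, rng):
    counts, edges = np.histogram(values, bins=bins)
    noisy = counts + rng.laplace(0.0, 1.0 / epsilon, size=counts.shape)
    return np.clip(np.round(noisy), 0, None), edges

rng = np.random.default_rng(0)
ages = rng.integers(18, 90, size=500)               # toy sensitive attribute
sanitized_counts, bin_edges = dp_histogram(ages, bins=8, epsilon=1.0, rng=rng)
```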
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
- ConfounderGAN: Protecting Image Data Privacy with Causal Confounder [85.6757153033139]
We propose ConfounderGAN, a generative adversarial network (GAN) that can make personal image data unlearnable to protect the data privacy of its owners.
Experiments are conducted in six image classification datasets, consisting of three natural object datasets and three medical datasets.
arXiv Detail & Related papers (2022-12-04T08:49:14Z)
- Cross-Network Social User Embedding with Hybrid Differential Privacy Guarantees [81.6471440778355]
We propose a Cross-network Social User Embedding framework, namely DP-CroSUE, to learn the comprehensive representations of users in a privacy-preserving way.
In particular, for each heterogeneous social network, we first introduce a hybrid differential privacy notion to capture the variation of privacy expectations for heterogeneous data types.
To further enhance user embeddings, a novel cross-network GCN embedding model is designed to transfer knowledge across networks through those aligned users.
arXiv Detail & Related papers (2022-09-04T06:22:37Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than non-private versions.
We study how the main ingredients of differentially private neural network training, such as gradient clipping and noise addition, affect the robustness of the model.
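Those two ingredients are easy to isolate. A minimal numpy sketch of one DP-SGD-style update, with stand-in gradients and illustrative hyperparameters:

```python
# DP-SGD mechanics: clip each example's gradient, sum, add Gaussian noise.
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_mult, rng):
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale             # per-example clipping
    noise = rng.normal(0.0, noise_mult * clip_norm, size=clipped.shape[1])
    return (clipped.sum(axis=0) + noise) / len(per_example_grads)

rng = np.random.default_rng(0)
grads = rng.normal(size=(32, 10))                   # 32 examples, 10 params
update = dp_sgd_step(grads, clip_norm=1.0, noise_mult=1.1, rng=rng)
```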
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Privacy-Preserving Synthetic Smart Meters Data [24.29870358870313]
We propose a method to generate synthetic power consumption samples that faithfully imitate the originals.
Our method is based on Generative Adversarial Networks (GANs).
We study the privacy guarantees provided to members of the training set of our neural network.
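A minimal GAN training loop for 1-D consumption profiles; the architecture, sizes, and data below are stand-ins rather than the paper's setup:

```python
# Toy GAN sketch for synthetic smart meter profiles (illustrative only).
import torch
import torch.nn as nn

DIM, NOISE = 48, 16                      # 48 half-hourly readings (assumption)
G = nn.Sequential(nn.Linear(NOISE, 64), nn.ReLU(), nn.Linear(64, DIM))
D = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
real_data = torch.rand(256, DIM)         # stand-in for real meter readings

for step in range(200):
    real = real_data[torch.randint(0, 256, (32,))]
    fake = G(torch.randn(32, NOISE))
    # Discriminator: push real toward 1, fake toward 0.
    d_loss = (bce(D(real), torch.ones(32, 1))
              + bce(D(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool the discriminator.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

synthetic = G(torch.randn(100, NOISE)).detach()   # release these, not the data
```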
arXiv Detail & Related papers (2020-12-06T16:11:16Z)
- Deep Directed Information-Based Learning for Privacy-Preserving Smart Meter Data Release [30.409342804445306]
We study the problem in the context of time series data and smart meter (SM) power consumption measurements.
We introduce the Directed Information (DI) as a more meaningful measure of privacy in the considered setting.
Our empirical studies on real-world SM measurement datasets show the existing trade-offs between privacy and utility in the worst-case scenario.
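For reference, the DI from an observed sequence X^n to a released sequence Y^n is Massey's causal quantity,

```latex
I(X^n \to Y^n) = \sum_{t=1}^{n} I\left(X^t;\, Y_t \mid Y^{t-1}\right),
```

which, unlike ordinary mutual information, respects the temporal ordering of the time series and is therefore better matched to sequential SM data.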
arXiv Detail & Related papers (2020-11-20T13:41:11Z)
- Privacy Adversarial Network: Representation Learning for Mobile Data Privacy [33.75500773909694]
A growing number of cloud-based intelligent services for mobile users require user data to be sent to the provider.
Prior works either obfuscate the data, e.g. add noise and remove identity information, or send representations extracted from the data, e.g. anonymized features.
This work departs from prior works in methodology: we leverage adversarial learning to strike a better balance between privacy and utility.
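A minimal sketch of that adversarial recipe; module sizes, labels, and the alternating update below are illustrative assumptions, not the paper's architecture:

```python
# Adversarial representation learning sketch: the encoder keeps features
# useful for the service's task while defeating a privacy adversary.
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(32, 16), nn.ReLU())   # on-device encoder
task = nn.Linear(16, 4)                             # utility head (service)
adv = nn.Linear(16, 8)                              # adversary (e.g. user id)
ce = nn.CrossEntropyLoss()
opt_main = torch.optim.Adam([*enc.parameters(), *task.parameters()], lr=1e-3)
opt_adv = torch.optim.Adam(adv.parameters(), lr=1e-3)

x = torch.randn(64, 32)                             # toy mobile data batch
y_task = torch.randint(0, 4, (64,))                 # utility labels
y_priv = torch.randint(0, 8, (64,))                 # private labels

for step in range(100):
    z = enc(x)
    # 1) Train the adversary to predict the private label from features.
    adv_loss = ce(adv(z.detach()), y_priv)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()
    # 2) Train encoder + task head: preserve utility, defeat the adversary.
    main_loss = ce(task(z), y_task) - ce(adv(z), y_priv)
    opt_main.zero_grad(); main_loss.backward(); opt_main.step()
```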
arXiv Detail & Related papers (2020-06-08T09:42:04Z)