PGLP: Customizable and Rigorous Location Privacy through Policy Graph
- URL: http://arxiv.org/abs/2005.01263v2
- Date: Wed, 15 Jul 2020 15:16:37 GMT
- Title: PGLP: Customizable and Rigorous Location Privacy through Policy Graph
- Authors: Yang Cao, Yonghui Xiao, Shun Takagi, Li Xiong, Masatoshi Yoshikawa,
Yilin Shen, Jinfei Liu, Hongxia Jin, and Xiaofeng Xu
- Abstract summary: We propose a new location privacy notion called PGLP, which provides a rich interface to release private locations with customizable and rigorous privacy guarantee.
Specifically, we formalize a user's location privacy requirements using a \textit{location policy graph}, which is expressive and customizable.
We also design a private location trace release framework that pipelines the detection of location exposure, policy graph repair, and private trajectory release with customizable and rigorous location privacy.
- Score: 68.3736286350014
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Location privacy has been extensively studied in the literature. However,
existing location privacy models are either not rigorous or not customizable,
which limits the trade-off between privacy and utility in many real-world
applications. To address this issue, we propose a new location privacy notion
called PGLP, i.e., \textit{Policy Graph based Location Privacy}, providing a
rich interface to release private locations with customizable and rigorous
privacy guarantee. First, we design the privacy metrics of PGLP by extending
differential privacy. Specifically, we formalize a user's location privacy
requirements using a \textit{location policy graph}, which is expressive and
customizable. Second, we investigate how to satisfy an arbitrarily given
location policy graph under adversarial knowledge. We find that a location
policy graph may not always be viable and may suffer \textit{location exposure}
when the attacker knows the user's mobility pattern. We propose efficient
methods to detect location exposure and repair the policy graph with optimal
utility. Third, we design a private location trace release framework that
pipelines the detection of location exposure, policy graph repair, and private
trajectory release with customizable and rigorous location privacy. Finally, we
conduct experiments on real-world datasets to verify the effectiveness of the
privacy-utility trade-off and the efficiency of the proposed algorithms.
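The detect-repair-release pipeline described in the abstract can be illustrated with a small sketch. The adjacency-set graph representation, the isolation-based exposure test, and the naive repair step below are illustrative assumptions, not the paper's exact algorithms: here an edge means two locations should remain indistinguishable, and a feasible location is treated as "exposed" when the adversary's knowledge leaves it no policy-graph neighbor to hide among.

```python
from collections import defaultdict

def build_policy_graph(edges):
    """Adjacency-set view of a location policy graph: nodes are locations,
    an edge means the two locations should stay indistinguishable."""
    graph = defaultdict(set)
    for u, v in edges:
        graph[u].add(v)
        graph[v].add(u)
    return graph

def detect_exposure(graph, feasible):
    """A feasible location is 'exposed' if it has no policy-graph neighbor
    inside the adversary's feasible set (illustrative isolation test)."""
    feasible = set(feasible)
    return {u for u in feasible if not (graph[u] & feasible)}

def repair(graph, feasible, exposed):
    """Naive repair: connect each exposed node to another feasible node so
    the indistinguishability requirement becomes satisfiable again."""
    feasible = list(feasible)
    for u in exposed:
        for v in feasible:
            if v != u:
                graph[u].add(v)
                graph[v].add(u)
                break
    return graph
```

After a repair pass, re-running the exposure check on the same feasible set should come back empty; the paper's actual repair seeks minimal utility loss rather than this arbitrary edge choice.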
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on data and allows for defining non-sensitive temporal regions without DP application, or for combining differential privacy with other privacy techniques within data samples.
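One way to read "operates selectively on data": calibrated noise is added only inside regions flagged as sensitive, while the rest passes through unchanged. The sketch below is a hypothetical illustration of that idea with a per-element Laplace mechanism, not the paper's method.

```python
import math
import random

def laplace_noise(scale):
    """Sample a Laplace(0, scale) variate via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def masked_laplace(values, sensitive_mask, eps, sensitivity=1.0):
    """Add Laplace noise only where the mask flags a sensitive region;
    non-sensitive entries are released unchanged (illustrative sketch)."""
    scale = sensitivity / eps
    return [v + laplace_noise(scale) if flagged else v
            for v, flagged in zip(values, sensitive_mask)]
```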
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action [54.11479432110771]
PrivacyLens is a novel framework designed to extend privacy-sensitive seeds into expressive vignettes and further into agent trajectories.
We instantiate PrivacyLens with a collection of privacy norms grounded in privacy literature and crowdsourced seeds.
State-of-the-art LMs, like GPT-4 and Llama-3-70B, leak sensitive information in 25.68% and 38.69% of cases, even when prompted with privacy-enhancing instructions.
arXiv Detail & Related papers (2024-08-29T17:58:38Z)
- Measuring Privacy Loss in Distributed Spatio-Temporal Data [26.891854386652266]
We propose an alternative privacy loss against location reconstruction attacks by an informed adversary.
Our experiments on real and synthetic data demonstrate that our privacy loss better reflects our intuitions on individual privacy violation in the distributed setting.
arXiv Detail & Related papers (2024-02-18T09:53:14Z)
- Protecting Personalized Trajectory with Differential Privacy under Temporal Correlations [37.88484505367802]
This paper proposes a personalized trajectory privacy protection mechanism (PTPPM).
We identify a protection location set (PLS) for each location by employing the Hilbert curve-based minimum distance search algorithm.
We put forth a novel Permute-and-Flip mechanism for location perturbation, which maps its initial application in data publishing privacy protection to a location perturbation mechanism.
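Permute-and-Flip is a known differentially private selection mechanism (McKenna and Sheldon, 2020); how PTPPM maps it onto location perturbation is specific to that paper, but the base mechanism itself can be sketched: shuffle the candidates, then accept each in turn with a probability that decays exponentially in its quality gap from the best candidate.

```python
import math
import random

def permute_and_flip(candidates, quality, eps, sensitivity=1.0):
    """Permute-and-Flip selection: shuffle candidates, then flip a biased
    coin per candidate; the best candidate is always accepted if reached,
    so the loop terminates. `quality` has the given sensitivity."""
    best = max(quality(c) for c in candidates)
    pool = list(candidates)
    random.shuffle(pool)
    for c in pool:
        accept_prob = math.exp(eps * (quality(c) - best) / (2.0 * sensitivity))
        if random.random() <= accept_prob:
            return c
```

At large eps the mechanism concentrates on the highest-quality candidate; at small eps it approaches a uniform pick.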
arXiv Detail & Related papers (2024-01-20T12:59:08Z)
- Privacy-Preserving Graph Embedding based on Local Differential Privacy [26.164722283887333]
We introduce a novel privacy-preserving graph embedding framework, named PrivGE, to protect node data privacy.
Specifically, we propose an LDP mechanism to obfuscate node data and utilize personalized PageRank as the proximity measure to learn node representations.
Experiments on several real-world graph datasets demonstrate that PrivGE achieves an optimal balance between privacy and utility.
arXiv Detail & Related papers (2023-10-17T08:06:08Z)
- Echo of Neighbors: Privacy Amplification for Personalized Private Federated Learning with Shuffle Model [21.077469463027306]
Federated Learning, as a popular paradigm for collaborative training, is vulnerable to privacy attacks.
This work strengthens model privacy under personalized local privacy by leveraging the privacy amplification effect of the shuffle model.
To the best of our knowledge, the impact of shuffling on personalized local privacy is considered for the first time.
arXiv Detail & Related papers (2023-04-11T21:48:42Z)
- Smooth Anonymity for Sparse Graphs [69.1048938123063]
Differential privacy has emerged as the gold standard of privacy; however, it faces challenges when it comes to sharing sparse datasets.
In this work, we consider a variation of $k$-anonymity, which we call smooth-$k$-anonymity, and design simple large-scale algorithms that efficiently provide smooth-$k$-anonymity.
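The smooth variant's exact definition is given in that paper; the classical k-anonymity requirement it builds on can be sketched as a simple check: every released quasi-identifier combination must be shared by at least k records. The record/attribute shape below is an illustrative assumption.

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """Classical k-anonymity: every projection of a record onto the
    quasi-identifier attributes must occur at least k times."""
    projections = Counter(tuple(r[a] for a in quasi_ids) for r in records)
    return all(count >= k for count in projections.values())
```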
arXiv Detail & Related papers (2022-07-13T17:09:25Z)
- Privacy Amplification via Shuffling for Linear Contextual Bandits [51.94904361874446]
We study the contextual linear bandit problem with differential privacy (DP).
We show that it is possible to achieve a privacy/utility trade-off between JDP and LDP by leveraging the shuffle model of privacy while preserving local privacy.
arXiv Detail & Related papers (2021-12-11T15:23:28Z)
- Location Trace Privacy Under Conditional Priors [22.970796265042246]
We propose a Rényi divergence based privacy framework for bounding expected privacy loss for conditionally dependent data.
We demonstrate an algorithm for achieving this privacy under conditional priors.
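The framework's bound is built on the Rényi divergence, whose standard discrete form is D_alpha(P||Q) = (1/(alpha-1)) log sum_i p_i^alpha q_i^(1-alpha); it tends to the KL divergence as alpha approaches 1. A minimal sketch of that quantity (assuming Q's support covers P's):

```python
import math

def renyi_divergence(p, q, alpha):
    """Renyi divergence D_alpha(P || Q) between discrete distributions,
    for alpha > 0, alpha != 1; zero-probability terms of P are skipped."""
    assert alpha > 0 and alpha != 1
    total = sum(pi ** alpha * qi ** (1.0 - alpha)
                for pi, qi in zip(p, q) if pi > 0)
    return math.log(total) / (alpha - 1)
```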
arXiv Detail & Related papers (2021-02-23T21:55:34Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.