Systematic Evaluation of Geolocation Privacy Mechanisms
- URL: http://arxiv.org/abs/2309.06263v1
- Date: Tue, 12 Sep 2023 14:23:19 GMT
- Title: Systematic Evaluation of Geolocation Privacy Mechanisms
- Authors: Alban Héon, Ryan Sheatsley, Quinn Burke, Blaine Hoak, Eric Pauley, Yohan Beugin, Patrick McDaniel
- Abstract summary: Location Privacy Preserving Mechanisms (LPPMs) have been proposed by previous works to ensure the privacy of the shared data.
We study the sensitivity of LPPMs to the scenario in which they are used.
- Score: 6.356211727228669
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Location data privacy has become a serious concern for users as Location Based Services (LBSs) have become an important part of their lives. Malicious parties with access to geolocation data can learn sensitive information about the user, such as religion or political views. Location Privacy Preserving Mechanisms (LPPMs) have been proposed by previous works to ensure the privacy of the shared data while allowing the users to use LBSs. However, there is no clear view of which mechanism to use according to the scenario in which the user makes use of an LBS, where the scenario is the way the user is using the LBS (frequency of reports, number of reports). In this paper, we study the sensitivity of LPPMs to the scenario in which they are used. We propose a framework to systematically evaluate LPPMs by considering an exhaustive combination of LPPMs, attacks and metrics. Using our framework, we compare a selection of LPPMs, including an improved mechanism that we introduce. By evaluating over a variety of scenarios, we find that the efficacy (privacy, utility, and robustness) of the studied mechanisms depends on the scenario: for example, the privacy of Planar Laplace geo-indistinguishability is greatly reduced in a continuous scenario. We show that the scenario is essential to consider when choosing an obfuscation mechanism for a given application.
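Since the abstract singles out Planar Laplace geo-indistinguishability, a minimal sketch of that mechanism (Andrés et al., 2013) may help make the evaluated object concrete. This is not the paper's implementation; the epsilon value and coordinate units below are illustrative assumptions.

```python
# Minimal sketch of the Planar Laplace mechanism for geo-indistinguishability.
# Not the paper's code; epsilon and distance units are assumed for illustration.
import numpy as np
from scipy.special import lambertw

def planar_laplace(x, y, epsilon, rng=None):
    """Obfuscate the location (x, y) with planar Laplace noise of parameter epsilon."""
    rng = rng or np.random.default_rng()
    theta = rng.uniform(0.0, 2.0 * np.pi)   # direction of the noise, uniform
    p = rng.uniform(0.0, 1.0)               # quantile for the radial distance
    # Inverse CDF of the radial component uses branch -1 of the Lambert W function.
    r = -(1.0 / epsilon) * (np.real(lambertw((p - 1.0) / np.e, k=-1)) + 1.0)
    return x + r * np.cos(theta), y + r * np.sin(theta)

# Example: one obfuscated report with epsilon = 0.01 per unit of distance.
print(planar_laplace(0.0, 0.0, epsilon=0.01))
```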
Related papers
- Private Counterfactual Retrieval [34.11302393278422]
Transparency and explainability are two extremely important aspects to be considered when employing black-box machine learning models.
Providing counterfactual explanations is one way of catering to this requirement.
We propose multiple schemes inspired by private information retrieval (PIR) techniques.
arXiv Detail & Related papers (2024-10-17T17:45:07Z)
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
arXiv Detail & Related papers (2024-06-20T13:54:32Z)
- A Framework for Managing Multifaceted Privacy Leakage While Optimizing Utility in Continuous LBS Interactions [0.0]
We present several novel contributions aimed at advancing the understanding and management of privacy leakage in LBS.
Our contributions provide a more comprehensive framework for analyzing privacy concerns across different facets of location-based interactions.
arXiv Detail & Related papers (2024-04-20T15:20:01Z)
- Privacy Amplification for the Gaussian Mechanism via Bounded Support [64.86780616066575]
Data-dependent privacy accounting frameworks such as per-instance differential privacy (pDP) and Fisher information loss (FIL) confer fine-grained privacy guarantees for individuals in a fixed training dataset.
We propose simple modifications of the Gaussian mechanism with bounded support, showing that they amplify privacy guarantees under data-dependent accounting.
arXiv Detail & Related papers (2024-03-07T21:22:07Z)
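As a hedged illustration of the kind of modification described in that entry, the sketch below rectifies (clamps) Gaussian noise to a bounded interval; the bounds and noise scale are assumptions for illustration, not the paper's exact construction.

```python
# Illustrative sketch only: a Gaussian mechanism whose output is rectified
# (clamped) to a bounded support [lo, hi]. Parameter values are assumed.
import numpy as np

def bounded_gaussian(value, sigma, lo, hi, rng=None):
    """Release value + N(0, sigma^2), clamped so the output stays in [lo, hi]."""
    rng = rng or np.random.default_rng()
    noisy = value + rng.normal(0.0, sigma)
    return float(np.clip(noisy, lo, hi))

# Example: releasing a statistic known to lie in [0, 1].
print(bounded_gaussian(0.42, sigma=0.1, lo=0.0, hi=1.0))
```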
- Unified Mechanism-Specific Amplification by Subsampling and Group Privacy Amplification
Amplification by subsampling is one of the main primitives in machine learning with differential privacy.
We propose the first general framework for deriving mechanism-specific guarantees.
We analyze how subsampling affects the privacy of groups of multiple users.
arXiv Detail & Related papers (2024-03-07T19:36:05Z)
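For context, a worked instance of the classical amplification-by-subsampling bound (the textbook result, not the tighter mechanism-specific guarantees derived in that paper): running an (eps, delta)-DP mechanism on a Poisson-subsampled dataset with sampling rate q yields roughly (log(1 + q(e^eps - 1)), q*delta)-DP.

```python
# Classical amplification-by-subsampling bound for Poisson subsampling.
# A generic bound, not the mechanism-specific analysis of the paper above.
import math

def amplified_dp(eps, delta, q):
    """Privacy of an (eps, delta)-DP mechanism run on a q-subsampled dataset."""
    eps_sub = math.log(1.0 + q * (math.exp(eps) - 1.0))
    delta_sub = q * delta
    return eps_sub, delta_sub

# Example: eps = 1.0, delta = 1e-6, sampling rate q = 0.01.
print(amplified_dp(1.0, 1e-6, 0.01))   # roughly (0.017, 1e-8)
```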
- A Learning-based Declarative Privacy-Preserving Framework for Federated Data Management [23.847568516724937]
We introduce a new privacy-preserving technique that uses a deep learning model trained with the Differentially Private Stochastic Gradient Descent (DP-SGD) algorithm.
We then demonstrate a novel declarative privacy-preserving workflow that allows users to specify "what private information to protect" rather than "how to protect".
arXiv Detail & Related papers (2024-01-22T22:50:59Z)
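For readers unfamiliar with DP-SGD, the sketch below shows its core update (per-example gradient clipping plus Gaussian noise) in plain NumPy; the clip norm, noise multiplier, and learning rate are illustrative assumptions, and this is not the paper's training pipeline.

```python
# Core of one DP-SGD step: clip each per-example gradient to norm C, sum,
# add Gaussian noise scaled by C, average, and descend. Hyperparameters assumed.
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0, noise_multiplier=1.1,
                lr=0.1, rng=None):
    rng = rng or np.random.default_rng()
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))  # clip to norm C
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / len(per_example_grads)
    return params - lr * noisy_mean

# Example with three fake per-example gradients for a 2-parameter model.
params = np.zeros(2)
grads = [np.array([0.5, -1.0]), np.array([2.0, 0.3]), np.array([-0.7, 0.4])]
print(dp_sgd_step(params, grads))
```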
- Protecting Personalized Trajectory with Differential Privacy under Temporal Correlations [37.88484505367802]
This paper proposes a personalized trajectory privacy protection mechanism (PTPPM).
We identify a protection location set (PLS) for each location by employing the Hilbert curve-based minimum distance search algorithm.
We put forth a novel Permute-and-Flip mechanism for location perturbation, adapting a mechanism originally applied to data publishing privacy protection into a location perturbation mechanism.
arXiv Detail & Related papers (2024-01-20T12:59:08Z)
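The sketch below shows the generic Permute-and-Flip selection mechanism (McKenna & Sheldon, 2020) over a finite candidate set; scoring candidate locations by negative distance to the true location is an assumption for illustration, not necessarily how PTPPM instantiates it.

```python
# Generic Permute-and-Flip: visit candidates in random order and accept each
# with probability exp(eps * (score - best_score) / (2 * sensitivity)).
# Scoring by negative distance to the true location is an illustrative choice.
import math
import random

def permute_and_flip(candidates, score, eps, sensitivity):
    best = max(score(c) for c in candidates)
    order = list(candidates)
    random.shuffle(order)
    for c in order:
        # A best-scoring candidate is accepted with probability 1, so this returns.
        if random.random() <= math.exp(eps * (score(c) - best) / (2.0 * sensitivity)):
            return c

# Example: pick an obfuscated location from a small protection location set.
true_loc = (3.0, 4.0)
pls = [(3.0, 4.0), (3.5, 4.5), (2.0, 6.0), (5.0, 1.0)]
neg_dist = lambda c: -math.hypot(c[0] - true_loc[0], c[1] - true_loc[1])
print(permute_and_flip(pls, neg_dist, eps=1.0, sensitivity=1.0))
```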
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
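As a small, hedged illustration of the $f$-DP viewpoint (using the standard trade-off function for pure eps-DP rather than the tighter mechanism-specific curves derived in that paper): binary randomized response with keep probability e^eps / (1 + e^eps) satisfies eps-LDP, and its trade-off function is f(alpha) = max(0, 1 - e^eps * alpha, e^(-eps) * (1 - alpha)).

```python
# Trade-off function f_{eps,0} of a pure eps-DP mechanism, which binary
# randomized response attains; a generic curve, not the paper's tighter bounds.
import math
import random

def tradeoff_pure_dp(alpha, eps):
    """Smallest achievable type-II error at type-I error alpha under eps-DP."""
    return max(0.0, 1.0 - math.exp(eps) * alpha, math.exp(-eps) * (1.0 - alpha))

def randomized_response(bit, eps):
    """Report the true bit w.p. e^eps / (1 + e^eps), else its flip (eps-LDP)."""
    keep = math.exp(eps) / (1.0 + math.exp(eps))
    return bit if random.random() < keep else 1 - bit

print(tradeoff_pure_dp(0.05, eps=1.0))   # ~0.864: neighbors stay hard to distinguish
print(randomized_response(1, eps=1.0))
```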
- Optimal and Differentially Private Data Acquisition: Central and Local Mechanisms [9.599356978682108]
We consider a platform's problem of collecting data from privacy sensitive users to estimate an underlying parameter of interest.
We consider two popular differential privacy settings for providing privacy guarantees for the users: central and local.
We pose the mechanism design problem as the optimal selection of an estimator and payments that will elicit truthful reporting of users' privacy sensitivities.
arXiv Detail & Related papers (2022-01-10T00:27:43Z)
- Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
Local exchange of estimates allows inference of the underlying private data.
Perturbations chosen independently at every agent result in a significant performance loss.
We propose an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to be invisible.
arXiv Detail & Related papers (2020-10-23T10:35:35Z)
- PGLP: Customizable and Rigorous Location Privacy through Policy Graph [68.3736286350014]
We propose a new location privacy notion called PGLP, which provides a rich interface to release private locations with customizable and rigorous privacy guarantees.
Specifically, we formalize a user's location privacy requirements using a location policy graph, which is expressive and customizable.
We also design a private location trace release framework that pipelines the detection of location exposure, policy graph repair, and private trajectory release with customizable and rigorous location privacy.
arXiv Detail & Related papers (2020-05-04T04:25:59Z)