H-LPS: a hybrid approach for user's location privacy in location-based services
- URL: http://arxiv.org/abs/2212.08241v2
- Date: Wed, 22 Jan 2025 02:34:20 GMT
- Title: H-LPS: a hybrid approach for user's location privacy in location-based services
- Authors: Sonia Sabir, Inayat Ali, Eraj Khan
- Abstract summary: We have proposed a hybrid location privacy scheme (H-LPS) based on obfuscation and collaboration for protecting users' location privacy.
Our proposed scheme, H-LPS, provides a very high level of privacy while maintaining good accuracy for most users.
- Score: 0.0
- License:
- Abstract: Applications providing location-based services (LBS) have gained much attention and importance with the rise of the internet of things (IoT). Users utilize LBS by providing their location information to third-party service providers. However, location data is highly sensitive and can reveal a user's private life to adversaries. The passive and pervasive data collection in IoT raises serious location privacy issues, and privacy-preserving location-based services are therefore a hot research topic. Many anonymization and obfuscation techniques have been proposed to overcome location privacy issues. In this paper, we propose a hybrid location privacy scheme (H-LPS), based mainly on obfuscation and collaboration, for protecting users' location privacy while using location-based services. Obfuscation naturally degrades the quality of service but provides more privacy than anonymization. Our proposed scheme, H-LPS, provides a very high level of privacy yet good accuracy for most users. The privacy level and service accuracy of H-LPS are compared with state-of-the-art location privacy schemes, and it is shown that H-LPS could be a candidate solution for preserving user location privacy in location-based services.
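The listing does not spell out H-LPS's obfuscation step, but the privacy/accuracy trade-off the abstract describes can be illustrated with a standard location obfuscation technique: planar Laplace noise, as used in the geo-indistinguishability literature. This is a generic sketch, not the paper's actual mechanism; the coordinates, epsilon value, and function name below are illustrative assumptions.

```python
import math
import random

def obfuscate(lat: float, lon: float, eps: float) -> tuple[float, float]:
    """Perturb a location with planar Laplace noise.

    eps is the privacy parameter in 1/meters: smaller eps means more
    noise, hence more privacy but less service accuracy.
    """
    # The planar Laplace density eps^2/(2*pi) * exp(-eps*r) has a radial
    # marginal equal to Gamma(shape=2, scale=1/eps), so the noise radius
    # can be drawn directly with the standard library.
    r = random.gammavariate(2.0, 1.0 / eps)      # noise magnitude in meters
    theta = random.uniform(0.0, 2.0 * math.pi)   # uniform direction

    # Convert the metric offset back to degrees (rough equirectangular
    # approximation: ~111,320 m per degree of latitude).
    dlat = (r * math.sin(theta)) / 111_320.0
    dlon = (r * math.cos(theta)) / (111_320.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

# Report an obfuscated location to the LBS provider instead of the exact one.
noisy = obfuscate(33.6844, 73.0479, eps=0.01)
```

With eps = 0.01 the expected displacement is 2/eps = 200 meters; shrinking eps increases that displacement, which is exactly the quality-of-service degradation the abstract attributes to obfuscation.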
Related papers
- Enhancing Feature-Specific Data Protection via Bayesian Coordinate Differential Privacy [55.357715095623554]
Local Differential Privacy (LDP) offers strong privacy guarantees without requiring users to trust external parties.
We propose a Bayesian framework, Bayesian Coordinate Differential Privacy (BCDP), that enables feature-specific privacy quantification.
arXiv Detail & Related papers (2024-10-24T03:39:55Z)
- PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action [54.11479432110771]
PrivacyLens is a novel framework designed to extend privacy-sensitive seeds into expressive vignettes and further into agent trajectories.
We instantiate PrivacyLens with a collection of privacy norms grounded in privacy literature and crowdsourced seeds.
State-of-the-art LMs, like GPT-4 and Llama-3-70B, leak sensitive information in 25.68% and 38.69% of cases, even when prompted with privacy-enhancing instructions.
arXiv Detail & Related papers (2024-08-29T17:58:38Z)
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
arXiv Detail & Related papers (2024-06-20T13:54:32Z)
- Differentially Private GANs for Generating Synthetic Indoor Location Data [0.09831489366502298]
We introduce an indoor localization framework employing DPGANs to generate privacy-preserving indoor location data.
We evaluate the performance of our framework on a real-world indoor localization dataset.
arXiv Detail & Related papers (2024-04-10T21:43:27Z)
- Protecting Personalized Trajectory with Differential Privacy under Temporal Correlations [37.88484505367802]
This paper proposes a personalized trajectory privacy protection mechanism (PTPPM).
We identify a protection location set (PLS) for each location by employing the Hilbert curve-based minimum distance search algorithm.
We put forth a novel Permute-and-Flip mechanism for location perturbation, adapting a mechanism originally applied to privacy-preserving data publishing into a location perturbation mechanism.
arXiv Detail & Related papers (2024-01-20T12:59:08Z) - Echo of Neighbors: Privacy Amplification for Personalized Private
Federated Learning with Shuffle Model [21.077469463027306]
Federated Learning, as a popular paradigm for collaborative training, is vulnerable to privacy attacks.
This work strengthens model privacy under personalized local privacy by leveraging the privacy amplification effect of the shuffle model.
To the best of our knowledge, the impact of shuffling on personalized local privacy is considered for the first time.
arXiv Detail & Related papers (2023-04-11T21:48:42Z) - Privacy Amplification via Shuffling for Linear Contextual Bandits [51.94904361874446]
We study the contextual linear bandit problem with differential privacy (DP).
We show that it is possible to achieve a privacy/utility trade-off between JDP and LDP by leveraging the shuffle model of privacy while preserving local privacy.
arXiv Detail & Related papers (2021-12-11T15:23:28Z) - Location Trace Privacy Under Conditional Priors [22.970796265042246]
We propose a Rényi divergence based privacy framework for bounding expected privacy loss for conditionally dependent data.
We demonstrate an algorithm for achieving this privacy under conditional priors.
arXiv Detail & Related papers (2021-02-23T21:55:34Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
- PGLP: Customizable and Rigorous Location Privacy through Policy Graph [68.3736286350014]
We propose a new location privacy notion called PGLP, which provides a rich interface to release private locations with customizable and rigorous privacy guarantees.
Specifically, we formalize a user's location privacy requirements using a location policy graph, which is expressive and customizable.
We design a private location trace release framework that pipelines the detection of location exposure, policy graph repair, and private trajectory release with customizable and rigorous location privacy.
arXiv Detail & Related papers (2020-05-04T04:25:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.