Fast Private Location-based Information Retrieval Over the Torus
- URL: http://arxiv.org/abs/2407.19871v1
- Date: Mon, 29 Jul 2024 10:42:17 GMT
- Title: Fast Private Location-based Information Retrieval Over the Torus
- Authors: Joon Soo Yoo, Mi Yeon Hong, Ji Won Heo, Kang Hoon Lee, Ji Won Yoon
- Abstract summary: LocPIR preserves user location privacy when retrieving data from public clouds.
The system leverages TFHE's strength in non-polynomial evaluations.
- Score: 2.0680208842600454
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Location-based services offer immense utility, but also pose significant privacy risks. In response, we propose LocPIR, a novel framework using homomorphic encryption (HE), specifically the TFHE scheme, to preserve user location privacy when retrieving data from public clouds. Our system leverages TFHE's strength in non-polynomial evaluations, which is crucial for comparison operations. LocPIR showcases minimal client-server interaction, reduced memory overhead, and efficient throughput. Performance tests confirm its computational speed, making it a viable solution for practical scenarios, as demonstrated by its application to a COVID-19 alert model. Thus, LocPIR effectively addresses privacy concerns in location-based services, enabling secure data sharing from the public cloud.
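To make the retrieval flow concrete, the sketch below mocks, in plain Python, the kind of region-membership query LocPIR evaluates homomorphically. The Ciphertext wrapper, Region bounds, and covid_alert_db table are illustrative assumptions, not the paper's implementation; in the actual system the comparisons are non-polynomial TFHE evaluations over ciphertexts, so the cloud never sees the client's coordinates.

```python
# Plaintext mock of the location query LocPIR evaluates under TFHE.
# Ciphertext, Region, and covid_alert_db are illustrative stand-ins; in the
# real system the comparisons below run over TFHE ciphertexts, so the server
# never learns the client's coordinates.
from dataclasses import dataclass


@dataclass
class Ciphertext:
    """Stand-in for a TFHE ciphertext; here it merely wraps the plaintext."""
    value: int


def encrypt(v: int) -> Ciphertext:  # client side
    return Ciphertext(v)


def decrypt(c: Ciphertext) -> int:  # client side
    return c.value


@dataclass
class Region:
    lat_min: int
    lat_max: int
    lon_min: int
    lon_max: int
    alert: int  # payload, e.g. a COVID-19 alert flag for the region


# Hypothetical public database: coordinates are scaled to integers,
# since HE schemes operate on integer (or fixed-point) plaintexts.
covid_alert_db = [
    Region(3740, 3760, 12690, 12710, alert=1),
    Region(3500, 3520, 12900, 12920, alert=0),
]


def private_lookup(enc_lat: Ciphertext, enc_lon: Ciphertext) -> Ciphertext:
    """Server side: interval checks select the alert of the matching region.

    Each comparison would be a homomorphic comparison gate in TFHE; the
    multiply-and-accumulate acts as an oblivious selector over all entries.
    """
    result = 0
    for r in covid_alert_db:
        inside = (r.lat_min <= enc_lat.value <= r.lat_max
                  and r.lon_min <= enc_lon.value <= r.lon_max)
        result += int(inside) * r.alert  # only the matching region contributes
    return Ciphertext(result)


# The client encrypts its location, sends one query, and decrypts one reply,
# mirroring the minimal client-server interaction described in the abstract.
reply = private_lookup(encrypt(3750), encrypt(12700))
print("alert flag:", decrypt(reply))
```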
Related papers
- PrivAR: Real-Time Privacy Protection for Location-Based Augmented Reality Applications [5.9049896608422285]
Location-based augmented reality (LB-AR) applications, such as Pokémon Go, stream sub-second GPS updates.
PrivAR is the first client-side privacy framework for real-time LB-AR.
arXiv Detail & Related papers (2025-08-04T16:02:10Z)
- Versatile and Fast Location-Based Private Information Retrieval with Fully Homomorphic Encryption over the Torus [4.021179028452984]
We present VeLoPIR, a versatile location-based private information retrieval (PIR) system designed to preserve user privacy.
VeLoPIR introduces three operational modes (interval validation, coordinate validation, and identifier matching) that support a broad range of real-world applications.
We provide formal security and privacy proofs, confirming the system's robustness under standard cryptographic assumptions.
arXiv Detail & Related papers (2025-06-15T08:01:35Z)
- Urania: Differentially Private Insights into AI Use [104.7449031243196]
Urania provides end-to-end privacy protection by leveraging DP tools such as clustering, partition selection, and histogram-based summarization.
Results show the framework's ability to extract meaningful conversational insights while maintaining stringent user privacy.
arXiv Detail & Related papers (2025-06-05T07:00:31Z)
- PWC-MoE: Privacy-Aware Wireless Collaborative Mixture of Experts [59.5243730853157]
Large language models (LLMs) hosted on cloud servers alleviate the computational and storage burdens on local devices but raise privacy concerns.
Small language models (SLMs) running locally enhance privacy but suffer from limited performance on complex tasks.
We propose a privacy-aware wireless collaborative mixture of experts (PWC-MoE) framework to balance computational cost, performance, and privacy protection under bandwidth constraints.
arXiv Detail & Related papers (2025-05-13T16:27:07Z)
- Femur: A Flexible Framework for Fast and Secure Querying from Public Key-Value Store [17.375796500030916]
Existing Private Information Retrieval schemes provide full security but suffer from poor scalability.
We propose a novel variable-range PIR scheme optimized for bandwidth-constrained environments.
Experiments show that Femur outperforms the state-of-the-art designs even when ensuring the same full security level.
arXiv Detail & Related papers (2025-03-07T12:39:07Z)
- Robust Utility-Preserving Text Anonymization Based on Large Language Models [80.5266278002083]
Text anonymization is crucial for sharing sensitive data while maintaining privacy.
Existing techniques face the emerging challenge of re-identification attacks enabled by Large Language Models.
This paper proposes a framework composed of three LLM-based components: a privacy evaluator, a utility evaluator, and an optimization component.
arXiv Detail & Related papers (2024-07-16T14:28:56Z)
- PFID: Privacy First Inference Delegation Framework for LLMs [34.59282305562392]
This paper introduces a novel privacy-preservation framework named PFID for LLMs.
It addresses critical privacy concerns by localizing user data through model sharding and singular value decomposition.
arXiv Detail & Related papers (2024-06-18T03:27:09Z)
- A Framework for Managing Multifaceted Privacy Leakage While Optimizing Utility in Continuous LBS Interactions [0.0]
We present several novel contributions aimed at advancing the understanding and management of privacy leakage in LBS.
Our contributions provide a more comprehensive framework for analyzing privacy concerns across different facets of location-based interactions.
arXiv Detail & Related papers (2024-04-20T15:20:01Z)
- Protecting Personalized Trajectory with Differential Privacy under Temporal Correlations [37.88484505367802]
This paper proposes a personalized trajectory privacy protection mechanism (PTPPM).
We identify a protection location set (PLS) for each location by employing the Hilbert curve-based minimum distance search algorithm.
We put forth a novel Permute-and-Flip mechanism for location perturbation, adapting its original use in data publishing privacy protection to a location perturbation mechanism.
arXiv Detail & Related papers (2024-01-20T12:59:08Z)
- Hide and Seek (HaS): A Lightweight Framework for Prompt Privacy Protection [6.201275002179716]
We introduce the HaS framework, where "H(ide)" and "S(eek)" represent its two core processes: hiding private entities for anonymization and seeking private entities for de-anonymization.
To quantitatively assess HaS's privacy protection performance, we propose both black-box and white-box adversarial models.
arXiv Detail & Related papers (2023-09-06T14:54:11Z)
- Smooth Anonymity for Sparse Graphs [69.1048938123063]
Differential privacy has emerged as the gold standard of privacy; however, it faces limitations when it comes to sharing sparse datasets.
In this work, we consider a variation of k-anonymity, which we call smooth-k-anonymity, and design simple large-scale algorithms that efficiently provide smooth-k-anonymity.
arXiv Detail & Related papers (2022-07-13T17:09:25Z)
- TEE-based decentralized recommender systems: The raw data sharing redemption [3.0204520109309843]
We present REX, the first enclave-based decentralized recommender.
REX exploits trusted execution environments to improve convergence while preserving privacy.
We analyze the impact of raw data sharing in both deep neural network (DNN) and matrix factorization (MF) recommenders.
arXiv Detail & Related papers (2022-02-23T17:55:39Z)
- Privacy-Preserving Image Features via Adversarial Affine Subspace Embeddings [72.68801373979943]
Many computer vision systems require users to upload image features to the cloud for processing and storage.
We propose a new privacy-preserving feature representation.
Compared to the original features, our approach makes it significantly more difficult for an adversary to recover private information.
arXiv Detail & Related papers (2020-06-11T17:29:48Z)
- PGLP: Customizable and Rigorous Location Privacy through Policy Graph [68.3736286350014]
We propose a new location privacy notion called PGLP, which provides a rich interface to release private locations with customizable and rigorous privacy guarantees.
Specifically, we formalize a user's location privacy requirements using a location policy graph, which is expressive and customizable.
We design a private location trace release framework that pipelines the detection of location exposure, policy graph repair, and private trajectory release with customizable and rigorous location privacy.
arXiv Detail & Related papers (2020-05-04T04:25:59Z)
- A Privacy-Preserving Distributed Architecture for Deep-Learning-as-a-Service [68.84245063902908]
This paper introduces a novel distributed architecture for deep-learning-as-a-service.
It preserves users' sensitive data while providing cloud-based machine learning and deep learning services.
arXiv Detail & Related papers (2020-03-30T15:12:03Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference on the order of seconds for medium-sized SPNs.
arXiv Detail & Related papers (2020-02-03T14:49:18Z)