Publicly Verifiable Private Information Retrieval Protocols Based on Function Secret Sharing
- URL: http://arxiv.org/abs/2509.13684v1
- Date: Wed, 17 Sep 2025 04:28:47 GMT
- Title: Publicly Verifiable Private Information Retrieval Protocols Based on Function Secret Sharing
- Authors: Lin Zhu, Lingwei Kong, Xin Ning, Xiaoyang Qu, Jianzong Wang
- Abstract summary: Private Information Retrieval (PIR) is a cryptographic primitive that enables users to retrieve data from a database without revealing which item is being accessed. We propose two effective constructions of publicly verifiable PIR (PVPIR) in the multi-server setting, which achieve query privacy, correctness, and verifiability simultaneously.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Private Information Retrieval (PIR) is a fundamental cryptographic primitive that enables users to retrieve data from a database without revealing which item is being accessed, thereby preserving query privacy. However, PIR protocols also face the challenge of result verifiability, as users expect the reconstructed data to be trustworthy and authentic. In this work, we propose two effective constructions of publicly verifiable PIR (PVPIR) in the multi-server setting, which achieve query privacy, correctness, and verifiability simultaneously. We further present three concrete instantiations based on these constructions. For the point query, our protocol introduces minimal computational overhead and achieves strong verifiability guarantees with significantly lower communication costs compared to existing Merkle tree-based approaches. For the predicate query, the communication complexity of our scheme remains stable as the database size increases, demonstrating strong scalability and suitability for large-scale private query applications.
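To make the multi-server setting concrete, the following is a minimal sketch of a classic two-server PIR built from a secret-shared point function. This is an illustrative toy, not the paper's construction: it uses uncompressed XOR shares of the unit vector (linear-size keys), whereas FSS-based schemes such as the one in this paper compress the point-function keys to logarithmic size and add verifiability on top. All names (`share_point_function`, `server_answer`) are hypothetical.

```python
import secrets

def share_point_function(n, x):
    """Split the unit vector e_x into two XOR shares.

    Each share alone is a uniformly random bit vector, so a single
    server learns nothing about the queried index x.
    """
    r = [secrets.randbits(1) for _ in range(n)]
    e = [1 if i == x else 0 for i in range(n)]
    s = [e[i] ^ r[i] for i in range(n)]
    return r, s

def server_answer(db, share):
    """Each server XORs together the database items its share selects."""
    ans = 0
    for item, bit in zip(db, share):
        if bit:
            ans ^= item
    return ans

db = [10, 20, 30, 40]
x = 2
k1, k2 = share_point_function(len(db), x)
a1 = server_answer(db, k1)
a2 = server_answer(db, k2)
# The shares differ exactly at position x, so the answers XOR to db[x].
assert a1 ^ a2 == db[x]
```

The reconstruction works because a1 XOR a2 cancels every item selected by both shares, leaving only the item where the shares differ, i.e. position x. FSS replaces the n-bit shares with short keys, which is what keeps communication low; the paper's PVPIR constructions additionally let anyone verify that the reconstructed answer is authentic.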
Related papers
- Parallel Composition for Statistical Privacy [0.0]
A privacy mechanism is proposed that is based on subsampling and randomly partitioning the database to bound the dependency among queries. These bounds show that, in realistic application scenarios, taking the entropy of distributions into account yields improvements in privacy and precision guarantees.
arXiv Detail & Related papers (2026-02-10T10:13:44Z) - Urania: Differentially Private Insights into AI Use [102.27238986985698]
Urania provides end-to-end privacy protection by leveraging DP tools such as clustering, partition selection, and histogram-based summarization. Results show the framework's ability to extract meaningful conversational insights while maintaining stringent user privacy.
arXiv Detail & Related papers (2025-06-05T07:00:31Z) - Communication-Efficient and Privacy-Adaptable Mechanism for Federated Learning [54.20871516148981]
We introduce the Communication-Efficient and Privacy-Adaptable Mechanism (CEPAM), which achieves communication efficiency and privacy protection simultaneously. We theoretically analyze the privacy guarantee of CEPAM and investigate the trade-off between user privacy and accuracy.
arXiv Detail & Related papers (2025-01-21T11:16:05Z) - HOPE: Homomorphic Order-Preserving Encryption for Outsourced Databases -- A Stateless Approach [1.1701842638497677]
Homomorphic OPE (HOPE) is a new OPE scheme that eliminates client-side storage and avoids additional client-server interaction during query execution.
We provide a formal cryptographic analysis of HOPE, proving its security under the widely accepted IND-OCPA model.
arXiv Detail & Related papers (2024-11-26T00:38:46Z) - Private Counterfactual Retrieval [34.11302393278422]
Transparency and explainability are two extremely important considerations when employing black-box machine learning models in high-stakes applications. Providing counterfactual explanations is one way of fulfilling this requirement. We propose multiple schemes inspired by private information retrieval (PIR) techniques which ensure the user's privacy when retrieving counterfactual explanations.
arXiv Detail & Related papers (2024-10-17T17:45:07Z) - Robust Utility-Preserving Text Anonymization Based on Large Language Models [80.5266278002083]
Anonymizing text that contains sensitive information is crucial for a wide range of applications. Existing techniques face the emerging challenge posed by the re-identification capabilities of large language models. We propose a framework composed of three key components: a privacy evaluator, a utility evaluator, and an optimization component.
arXiv Detail & Related papers (2024-07-16T14:28:56Z) - Privacy-Enhanced Database Synthesis for Benchmark Publishing (Technical Report) [16.807486872855534]
Differential privacy (DP)-based data synthesis has become a key method for safeguarding privacy when sharing data. This paper delves into differentially private database synthesis specifically for benchmark publishing scenarios. We support the synthesis of high-quality benchmark databases that maintain fidelity in both data distribution and query runtime performance.
arXiv Detail & Related papers (2024-05-02T14:20:24Z) - VPAS: Publicly Verifiable and Privacy-Preserving Aggregate Statistics on Distributed Datasets [4.181095166452762]
We explore the challenge of input validation and public verifiability within privacy-preserving aggregation protocols.
We propose the "VPAS" protocol, which satisfies these requirements.
Our findings indicate that the overhead associated with verifiability in our protocol is 10x lower than that incurred by simply using conventional zkSNARKs.
arXiv Detail & Related papers (2024-03-22T13:50:22Z) - A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
arXiv Detail & Related papers (2023-04-17T00:38:01Z) - Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z) - Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z) - Provably-secure symmetric private information retrieval with quantum cryptography [0.0]
We propose using quantum key distribution (QKD) for a practical implementation, which can satisfy both the secure-communication and shared-randomness requirements.
We prove that QKD maintains the security of the SPIR protocol and that it is also secure against any external eavesdropper.
arXiv Detail & Related papers (2020-04-29T02:08:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.