PAC to the Future: Zero-Knowledge Proofs of PAC Private Systems
- URL: http://arxiv.org/abs/2602.11954v1
- Date: Thu, 12 Feb 2026 13:49:22 GMT
- Title: PAC to the Future: Zero-Knowledge Proofs of PAC Private Systems
- Authors: Guilhem Repetto, Nojan Sheybani, Gabrielle De Micheli, Farinaz Koushanfar
- Abstract summary: This paper introduces a novel framework combining Probably Approximately Correct (PAC) Privacy with zero-knowledge proofs (ZKPs) to provide verifiable privacy guarantees in trustless computing environments. We leverage non-interactive ZKP schemes to generate proofs that attest to the correct implementation of PAC privacy mechanisms while maintaining the confidentiality of proprietary systems.
- Score: 11.574355374384462
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Privacy concerns in machine learning systems have grown significantly with the increasing reliance on sensitive user data for training large-scale models. This paper introduces a novel framework combining Probably Approximately Correct (PAC) Privacy with zero-knowledge proofs (ZKPs) to provide verifiable privacy guarantees in trustless computing environments. Our approach addresses the limitations of traditional privacy-preserving techniques by enabling users to verify both the correctness of computations and the proper application of privacy-preserving noise, particularly in cloud-based systems. We leverage non-interactive ZKP schemes to generate proofs that attest to the correct implementation of PAC privacy mechanisms while maintaining the confidentiality of proprietary systems. Our results demonstrate the feasibility of achieving verifiable PAC privacy in outsourced computation, offering a practical solution for maintaining trust in privacy-preserving machine learning and database systems while ensuring computational integrity.
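The core PAC Privacy step the abstract refers to can be illustrated with a minimal sketch: estimate the variance of a mechanism's output over random subsamples of the data, then add Gaussian noise calibrated so the mutual information between data and output stays below a target bound. All names here are illustrative, and this is a simplified per-coordinate version of the construction, not the paper's implementation:

```python
import numpy as np

def pac_private_release(mechanism, data, n_trials=200, mi_bound=1.0, seed=None):
    """Hypothetical sketch of PAC-Privacy-style noise calibration.

    Runs the mechanism on random half-subsamples of the data to estimate
    the variance of its output, then adds Gaussian noise scaled so the
    mutual information between data and released output stays below
    mi_bound (in nats), via the Gaussian-channel bound
    MI <= 0.5 * ln(1 + var / sigma^2).
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    outputs = np.array([
        mechanism(data[rng.choice(n, size=n // 2, replace=False)])
        for _ in range(n_trials)
    ])
    var = outputs.var(axis=0, ddof=1)                # empirical output variance
    sigma = np.sqrt(var / np.expm1(2.0 * mi_bound))  # sigma^2 = var / (e^{2*MI} - 1)
    return mechanism(data) + rng.normal(0.0, sigma)
```

In the paper's setting, a ZKP would additionally attest that this noise was drawn and added as specified, without revealing the proprietary mechanism itself.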
Related papers
- Breaking the Gaussian Barrier: Residual-PAC Privacy for Automatic Privatization [27.430637970345433]
We show that the upper bound obtained by PAC Privacy algorithms is tight if and only if the perturbed mechanism output is jointly Gaussian with independent noise. We introduce Residual-PAC (R-PAC) Privacy, an f-divergence-based measure to quantify privacy that remains after adversarial inference. Our approach achieves efficient privacy budget utilization for arbitrary data distributions and naturally composes when multiple mechanisms access the dataset.
arXiv Detail & Related papers (2025-06-06T20:52:47Z) - Communication-Efficient and Privacy-Adaptable Mechanism for Federated Learning [54.20871516148981]
We introduce the Communication-Efficient and Privacy-Adaptable Mechanism (CEPAM), which achieves communication efficiency and privacy protection simultaneously. We theoretically analyze the privacy guarantee of CEPAM and investigate the trade-off between user privacy and the accuracy of CEPAM.
arXiv Detail & Related papers (2025-01-21T11:16:05Z) - ZK-DPPS: A Zero-Knowledge Decentralised Data Sharing and Processing Middleware [3.2995127573095484]
We propose ZK-DPPS, a framework that ensures zero-knowledge communications without the need for traditional ZKPs.
Privacy is preserved through a combination of Fully Homomorphic Encryption (FHE) for computations and Secure Multi-Party Computations (SMPC) for key reconstruction.
We demonstrate the efficacy of ZK-DPPS through a simulated supply chain scenario.
arXiv Detail & Related papers (2024-10-21T01:23:37Z) - Balancing Innovation and Privacy: Data Security Strategies in Natural Language Processing Applications [3.380276187928269]
This research addresses privacy protection in Natural Language Processing (NLP) by introducing a novel algorithm based on differential privacy.
By introducing a differential privacy mechanism, our model ensures the accuracy and reliability of data analysis results while adding random noise.
The proposed algorithm's efficacy is demonstrated through performance metrics such as accuracy (0.89), precision (0.85), and recall (0.88).
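The differential-privacy noise addition these summaries repeatedly invoke reduces, in its simplest form, to the standard Laplace mechanism: perturb a numeric query result with noise scaled to the query's sensitivity divided by the privacy budget. This is the generic building block, not any one paper's specific algorithm:

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, seed=None):
    """Standard Laplace mechanism for epsilon-differential privacy:
    adds Laplace noise with scale sensitivity/epsilon to a numeric
    query result. Smaller epsilon means more noise and stronger privacy."""
    rng = np.random.default_rng(seed)
    return value + rng.laplace(0.0, sensitivity / epsilon)
```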
arXiv Detail & Related papers (2024-10-11T06:05:10Z) - Bridging Privacy and Robustness for Trustworthy Machine Learning [6.318638597489423]
Machine learning systems require inherent robustness against data perturbations and adversarial manipulations. This paper systematically investigates the theoretical relationship between Local Differential Privacy (LDP) and Maximum Bayesian Privacy (MBP). We bridge these privacy concepts with algorithmic robustness, particularly within the Probably Approximately Correct (PAC) learning framework.
arXiv Detail & Related papers (2024-03-25T10:06:45Z) - Provable Privacy with Non-Private Pre-Processing [56.770023668379615]
We propose a general framework to evaluate the additional privacy cost incurred by non-private data-dependent pre-processing algorithms.
Our framework establishes upper bounds on the overall privacy guarantees by utilising two new technical notions.
arXiv Detail & Related papers (2024-03-19T17:54:49Z) - Declarative Privacy-Preserving Inference Queries [21.890318255305026]
We propose an end-to-end workflow for automating privacy-preserving inference queries. Our proposed novel declarative privacy-preserving workflow allows users to specify "what private information to protect" rather than "how to protect".
arXiv Detail & Related papers (2024-01-22T22:50:59Z) - Verifiable Privacy-Preserving Computing [3.543432625843538]
We analyze existing solutions that combine verifiability with privacy-preserving computations over distributed data.
We classify and compare 37 different schemes, regarding solution approach, security, efficiency, and practicality.
arXiv Detail & Related papers (2023-09-15T08:44:13Z) - A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR). The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
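The three-step EVR pipeline can be sketched as plain control flow. The function names here are illustrative placeholders, not the paper's API; the point is that the output is withheld whenever verification fails:

```python
def evr_release(mechanism, data, estimate_eps, verify):
    """Skeleton of the estimate-verify-release (EVR) flow.
    `estimate_eps` and `verify` are hypothetical stand-ins for the
    paper's privacy estimator and randomized verifier."""
    eps_hat = estimate_eps(mechanism)      # 1. estimate the privacy parameter
    if not verify(mechanism, eps_hat):     # 2. verify the estimate holds
        raise RuntimeError("privacy estimate not verified; output withheld")
    return mechanism(data)                 # 3. release the query output
```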
arXiv Detail & Related papers (2023-04-17T00:38:01Z) - Tight Auditing of Differentially Private Machine Learning [77.38590306275877]
Existing auditing mechanisms for private machine learning give tight privacy estimates only under implausible worst-case assumptions.
We design an improved auditing scheme that yields tight privacy estimates for natural (not adversarially crafted) datasets.
arXiv Detail & Related papers (2023-02-15T21:40:33Z) - When Crowdsensing Meets Federated Learning: Privacy-Preserving Mobile Crowdsensing System [12.087658145293522]
Mobile crowdsensing (MCS) is an emerging sensing data collection pattern with scalability, low deployment cost, and distributed characteristics.
Traditional MCS systems suffer from privacy concerns and fair reward distribution.
In this paper, we propose a privacy-preserving MCS system, called CrowdFL.
arXiv Detail & Related papers (2021-02-20T15:34:23Z) - PCAL: A Privacy-preserving Intelligent Credit Risk Modeling Framework Based on Adversarial Learning [111.19576084222345]
This paper proposes a framework of Privacy-preserving Credit risk modeling based on Adversarial Learning (PCAL).
PCAL aims to mask the private information inside the original dataset, while maintaining the important utility information for the target prediction task performance.
Results indicate that PCAL can learn an effective, privacy-free representation from user data, providing a solid foundation towards privacy-preserving machine learning for credit risk analysis.
arXiv Detail & Related papers (2020-10-06T07:04:59Z) - Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL). We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)