Tight Privacy Audit in One Run
- URL: http://arxiv.org/abs/2509.08704v1
- Date: Wed, 10 Sep 2025 15:55:03 GMT
- Title: Tight Privacy Audit in One Run
- Authors: Zihang Xiang, Tianhao Wang, Hanshen Xiao, Yuan Tian, Di Wang
- Abstract summary: We show that our method achieves tight audit results for various differentially private protocols. We also provide experiments that give contrasting conclusions to previous work on the parameter settings for privacy audits in one run.
- Score: 14.266167758603986
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study the problem of privacy audit in one run and show that our method achieves tight audit results for various differentially private protocols. This includes tight results for auditing $(\varepsilon,\delta)$-DP algorithms, which no previous work achieves under any parameter setup. We first formulate a framework for privacy audit \textit{in one run} that refines previous work. Then, modeling privacy via the $f$-DP formulation, we study the implications of our framework to obtain a theoretically justified lower bound for privacy audits. In our experiments, we compare against previous work and show that our audit method outperforms the rest in auditing various differentially private algorithms. We also provide experiments whose conclusions on the parameter settings for privacy audits in one run contrast with previous work.
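The one-run auditing setup in the abstract can be illustrated with a minimal sketch (this is an illustrative toy, not the paper's actual estimator): include each of $m$ canaries independently with probability 1/2, let an adversary guess membership from the trained model, and convert the adversary's guessing accuracy into an empirical lower bound on $\varepsilon$. A rigorous audit replaces the point estimate below with a confidence bound.

```python
import math
import random

def audit_one_run_sketch(guesses, truths):
    """Naive one-run audit sketch: convert membership-guess accuracy into
    an empirical epsilon lower bound. Real audits use rigorous confidence
    bounds instead of this point estimate."""
    m = len(truths)
    correct = sum(g == t for g, t in zip(guesses, truths))
    acc = correct / m
    # Against an eps-DP mechanism with a fair-coin membership prior,
    # adversary accuracy p satisfies p <= e^eps / (1 + e^eps),
    # so eps >= log(p / (1 - p)) is a (point-estimate) lower bound.
    acc = min(max(acc, 1e-9), 1 - 1e-9)  # avoid log(0)
    return max(0.0, math.log(acc / (1.0 - acc)))

# Toy usage: an adversary that is right about 80% of the time.
random.seed(0)
truths = [random.random() < 0.5 for _ in range(1000)]
guesses = [t if random.random() < 0.8 else (not t) for t in truths]
eps_lb = audit_one_run_sketch(guesses, truths)
```

A perfect adversary drives the bound up sharply, while a coin-flipping adversary yields a bound of zero, matching the intuition that the audit only certifies as much leakage as the attack exposes.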
Related papers
- Sequential Auditing for f-Differential Privacy [5.7992233755396505]
We present new auditors to assess Differential Privacy (DP) of an algorithm based on output samples. We shift the focus to the highly expressive privacy concept of $f$-DP, in which the entire privacy behavior is captured by a single tradeoff curve.
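The tradeoff-curve view of $f$-DP mentioned above can be made concrete with a minimal sketch for the Gaussian mechanism (a standard $f$-DP example, not specific to this paper): under $\mu$-Gaussian DP, any membership test with false-positive rate $\alpha$ has false-negative rate at least $f(\alpha) = \Phi(\Phi^{-1}(1-\alpha) - \mu)$.

```python
from statistics import NormalDist

def gaussian_tradeoff(alpha, mu):
    """Type-II error lower bound f(alpha) for mu-Gaussian DP:
    f(alpha) = Phi(Phi^{-1}(1 - alpha) - mu).
    Larger mu means weaker privacy, hence a lower curve."""
    nd = NormalDist()
    return nd.cdf(nd.inv_cdf(1.0 - alpha) - mu)

# mu = 0 gives the perfect-privacy line f(alpha) = 1 - alpha;
# the curve drops toward 0 as mu grows.
f_private = gaussian_tradeoff(0.05, 0.0)
f_leaky = gaussian_tradeoff(0.05, 3.0)
```

Auditing under $f$-DP then amounts to checking whether an attacker's observed (false-positive, false-negative) pair falls below the claimed curve.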
arXiv Detail & Related papers (2026-02-06T09:22:24Z)
- How Well Can Differential Privacy Be Audited in One Run? [2.687273760177295]
We characterize the maximum achievable efficacy of one-run auditing and show that its key barrier is interference between the observable effects of different data elements. We present new conceptual approaches to minimize this barrier, toward improving the performance of one-run auditing of real machine learning algorithms.
arXiv Detail & Related papers (2025-03-10T11:32:30Z)
- Privacy Audit as Bits Transmission: (Im)possibilities for Audit by One Run [7.850976675388593]
We introduce a unifying framework for privacy audits based on information-theoretic principles. We demystify the method of privacy audit by one run, identifying the conditions under which single-run audits are feasible or infeasible.
arXiv Detail & Related papers (2025-01-29T16:38:51Z)
- Adversarial Sample-Based Approach for Tighter Privacy Auditing in Final Model-Only Scenarios [5.116399056871577]
We introduce a novel auditing method that achieves tighter empirical lower bounds without additional assumptions. Our approach surpasses traditional canary-based adversarial approaches and is effective in final model-only scenarios.
arXiv Detail & Related papers (2024-12-02T17:52:16Z)
- Auditing $f$-Differential Privacy in One Run [43.34594422920125]
Empirical auditing has emerged as a means of catching some of the flaws in the implementation of privacy-preserving algorithms.
We present a tight and efficient auditing procedure and analysis that can effectively assess the privacy of mechanisms.
arXiv Detail & Related papers (2024-10-29T17:02:22Z)
- Provable Privacy with Non-Private Pre-Processing [56.770023668379615]
We propose a general framework to evaluate the additional privacy cost incurred by non-private data-dependent pre-processing algorithms.
Our framework establishes upper bounds on the overall privacy guarantees by utilising two new technical notions.
arXiv Detail & Related papers (2024-03-19T17:54:49Z)
- A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
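The estimate-verify-release loop summarized above can be sketched as follows. This is a hypothetical flow under simplifying assumptions: the verifier here is a stand-in callable rather than the paper's actual privacy verifier, and the Laplace mechanism is used only as a concrete releasable mechanism.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, eps):
    """Release true_value with Laplace noise calibrated to eps-DP.
    A Laplace(0, b) sample is the difference of two Exp(1/b) samples."""
    scale = sensitivity / eps
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_value + noise

def evr_release(true_value, sensitivity, estimated_eps, verify):
    """Estimate-Verify-Release sketch: release the output only if the
    verifier accepts the estimated privacy parameter; otherwise fail
    closed and release nothing."""
    if not verify(estimated_eps):
        return None  # verification failed: do not release
    return laplace_mechanism(true_value, sensitivity, estimated_eps)

# Toy verifier: accept only privacy estimates at or below a budget.
budget = 1.0
out_ok = evr_release(42.0, 1.0, 0.5, lambda e: e <= budget)
out_rejected = evr_release(42.0, 1.0, 5.0, lambda e: e <= budget)
```

The key design point is failing closed: when verification rejects the estimate, nothing is released, so an underestimated privacy parameter cannot silently leak more than claimed.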
arXiv Detail & Related papers (2023-04-17T00:38:01Z)
- Tight Auditing of Differentially Private Machine Learning [77.38590306275877]
For private machine learning, existing auditing mechanisms give tight estimates only under implausible worst-case assumptions.
We design an improved auditing scheme that yields tight privacy estimates for natural (not adversarially crafted) datasets.
arXiv Detail & Related papers (2023-02-15T21:40:33Z) - Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
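The per-attribute idea above can be illustrated with a toy Laplace calculation (illustrative only, not the paper's algorithms): if changing one attribute moves a query by at most a per-attribute sensitivity while changing a whole record moves it by a larger record-level sensitivity, the same noise scale yields a proportionally smaller per-attribute privacy parameter.

```python
def laplace_eps(sensitivity, noise_scale):
    """Privacy parameter of the Laplace mechanism at a given noise scale:
    eps = sensitivity / scale."""
    return sensitivity / noise_scale

# Toy query: sum of 10 attributes, each bounded in [0, 1].
delta_attr = 1.0     # changing one attribute moves the sum by at most 1
delta_record = 10.0  # changing a whole record moves it by at most 10
scale = 5.0
eps_attr = laplace_eps(delta_attr, scale)      # per-attribute parameter
eps_record = laplace_eps(delta_record, scale)  # whole-record parameter
```

The same released output thus carries a 10x stronger guarantee per attribute than per record in this toy setting, which is the granularity gap the paper's notion quantifies.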
arXiv Detail & Related papers (2022-09-08T22:43:50Z) - Debugging Differential Privacy: A Case Study for Privacy Auditing [60.87570714269048]
We show that auditing can also be used to find flaws in (purportedly) differentially private schemes.
In this case study, we audit a recent open source implementation of a differentially private deep learning algorithm and find, with 99.99999999% confidence, that the implementation does not satisfy the claimed differential privacy guarantee.
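A confidence figure like the one quoted above comes from a statistical test; a minimal sketch of the idea (not the paper's actual test) is an exact binomial tail bound. If the claimed $\varepsilon$ were true, an attacker's per-trial success probability would be at most $p^* = e^{\varepsilon}/(1+e^{\varepsilon})$, so observing $k$ successes in $n$ trials has p-value at most the binomial tail $\Pr[\mathrm{Bin}(n, p^*) \ge k]$.

```python
import math

def binom_tail(n, k, p):
    """Exact upper tail P[Bin(n, p) >= k]."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

def violation_confidence(n, k, claimed_eps):
    """Confidence that a mechanism violates claimed_eps-DP, given an
    attacker guessed membership correctly k times out of n trials.
    Under claimed_eps-DP, per-trial accuracy is bounded by
    p_star = e^eps / (1 + e^eps)."""
    p_star = math.exp(claimed_eps) / (1.0 + math.exp(claimed_eps))
    p_value = binom_tail(n, k, p_star)
    return 1.0 - p_value

# Toy usage: 950 correct guesses out of 1000 against a claim of eps = 1.
conf = violation_confidence(1000, 950, 1.0)
```

With a success rate far above the bound implied by the claim, the p-value is astronomically small, which is how an audit can report near-certain confidence that an implementation is not as private as advertised.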
arXiv Detail & Related papers (2022-02-24T17:31:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.