Sequential Auditing for f-Differential Privacy
- URL: http://arxiv.org/abs/2602.06518v1
- Date: Fri, 06 Feb 2026 09:22:24 GMT
- Title: Sequential Auditing for f-Differential Privacy
- Authors: Tim Kutta, Martin Dunsche, Yu Wei, Vassilis Zikas,
- Abstract summary: We present new auditors to assess Differential Privacy (DP) of an algorithm based on output samples. We shift the focus to the highly expressive privacy concept of $f$-DP, in which the entire privacy behavior is captured by a single tradeoff curve.
- Score: 5.7992233755396505
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present new auditors to assess Differential Privacy (DP) of an algorithm based on output samples. Such empirical auditors are commonly used to check for algorithmic correctness and implementation bugs. Most existing auditors are batch-based or targeted toward the traditional notion of $(\varepsilon,\delta)$-DP; typically both. In this work, we shift the focus to the highly expressive privacy concept of $f$-DP, in which the entire privacy behavior is captured by a single tradeoff curve. Our auditors detect violations across the full privacy spectrum with statistical significance guarantees, which are supported by theory and simulations. Most importantly, and in contrast to prior work, our auditors do not require a user-specified sample size as an input. Rather, they adaptively determine a near-optimal number of samples needed to reach a decision, thereby avoiding the excessively large sample sizes common in many auditing studies. This reduction in sampling cost is especially beneficial for expensive training procedures such as DP-SGD. Our method supports both whitebox and blackbox settings and can also be executed in single-run frameworks.
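To make the sequential idea concrete, here is a minimal, hedged sketch (our illustration, not the authors' auditor): we test a claimed $\mu$-GDP guarantee, a special case of $f$-DP, by playing a distinguishing game and tracking a Bernoulli likelihood-ratio e-process. Under $\mu$-GDP, no attacker's balanced accuracy can exceed $\Phi(\mu/2)$, and Ville's inequality makes the stopping rule anytime-valid, so the sample size is determined adaptively rather than fixed in advance.

```python
# Minimal sketch of a sequential f-DP audit (illustration, NOT the paper's
# algorithm). Null hypothesis: the mechanism is mu-GDP. Under mu-GDP the
# balanced accuracy of ANY distinguishing attack is at most Phi(mu/2), so
# we bet against that ceiling with a Bernoulli likelihood-ratio e-process
# and stop as soon as Ville's inequality certifies a violation.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def mechanism(x, sigma=0.35):
    # sensitivity-1 Gaussian mechanism; its true GDP parameter is 1/sigma ~ 2.86
    return x + rng.normal(0.0, sigma)

def audit_gdp(mu_claimed=2.0, alpha=0.05, p1=0.9, max_n=100_000):
    p0 = norm.cdf(mu_claimed / 2.0)  # accuracy ceiling implied by the claim
    log_e = 0.0
    for n in range(1, max_n + 1):
        b = int(rng.integers(2))                 # secret bit: D (x=0) vs D' (x=1)
        guess = int(mechanism(float(b)) > 0.5)   # optimal attack for this mechanism
        c = int(guess == b)
        # e-process against a fixed alternative p1 > p0; a supermartingale under the null
        log_e += c * np.log(p1 / p0) + (1 - c) * np.log((1 - p1) / (1 - p0))
        if log_e >= np.log(1.0 / alpha):         # Ville: false-alarm prob <= alpha
            return n, "claimed mu-GDP violated"
    return max_n, "no violation detected"

print(audit_gdp())  # sigma=0.35 makes the 2.0-GDP claim false; expect rejection
```

The stopping time adapts to the severity of the violation: blatant violations are rejected after few samples while borderline ones take longer, which is the qualitative behavior the abstract describes.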
Related papers
- Privacy in Theory, Bugs in Practice: Grey-Box Auditing of Differential Privacy Libraries [11.924357290256374]
We introduce Re:cord-play, a grey-box auditing paradigm that inspects the internal state of DP algorithms. By running an instrumented algorithm on neighboring datasets with identical randomness, Re:cord-play directly checks for data-dependent control flow. We show that our novel testing approach is both effective and necessary by auditing 12 open-source libraries.
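A hedged sketch of the record/replay paradigm described above (the toy mechanism and names are ours, not the library's code): run the instrumented mechanism on neighboring datasets with identical randomness and diff the recorded control-flow traces; any divergence signals data-dependent branching, e.g. branches that change how the random bits are consumed.

```python
# Hedged sketch of record/replay auditing (toy code, not Re:cord-play itself):
# execute an instrumented mechanism on neighboring datasets with IDENTICAL
# randomness and compare the control-flow traces.
import numpy as np

def noisy_sum(data, trace, seed=0):
    rng = np.random.default_rng(seed)   # replayed randomness: same seed each run
    total = 0.0
    for x in data:
        if x > 10.0:                    # data-dependent branch recorded in the trace
            trace.append("clip")
            x = 10.0
        else:
            trace.append("keep")
        total += x
    return total + rng.laplace(0.0, 10.0)

t_d, t_dprime = [], []
noisy_sum([3.0, 7.0, 12.0], t_d)        # dataset D
noisy_sum([3.0, 7.0, 2.0], t_dprime)    # neighbor D': one record changed
print("data-dependent control flow" if t_d != t_dprime else "traces identical")
```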
arXiv Detail & Related papers (2026-02-19T15:18:00Z) - Tight Privacy Audit in One Run [14.266167758603986]
We show that our method achieves tight audit results for various differentially private protocols. We also provide experiments whose conclusions contrast with previous work on the parameter settings for one-run privacy audits.
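For orientation, a hedged sketch of the generic one-run recipe this line of work builds on (the scoring model below is simulated, and this is not necessarily this paper's procedure): include each of $k$ canaries independently with probability 1/2, train once, let an attacker guess inclusions from per-canary scores, and test the number of correct guesses against chance.

```python
# Hedged sketch of a one-run audit skeleton (simulated scores stand in for
# "train once, then score each canary"; not this paper's method).
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(1)
k = 1000
included = rng.integers(2, size=k).astype(bool)   # secret inclusion bits
# leakage model: included canaries get systematically higher scores
scores = rng.normal(0.0, 1.0, size=k) + 1.5 * included
guesses = scores > 0.75
correct = int((guesses == included).sum())
p_value = binom.sf(correct - 1, k, 0.5)  # one-sided test against pure chance
print(f"{correct}/{k} correct guesses, p = {p_value:.2e}")
```

Real audits go further and convert the guess count into a lower bound on the privacy parameter; testing against the fair-coin baseline, as here, only refutes perfect privacy.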
arXiv Detail & Related papers (2025-09-10T15:55:03Z) - Benchmarking Fraud Detectors on Private Graph Data [70.4654745317714]
Currently, many types of fraud are managed in part by automated detection algorithms that operate over graphs. We consider the scenario where a data holder wishes to outsource development of fraud detectors to third parties. Third parties submit their fraud detectors to the data holder, who evaluates these algorithms on a private dataset and then publicly communicates the results. We propose a realistic privacy attack on this system that allows an adversary to de-anonymize individuals' data based only on the evaluation results.
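To see why exact evaluation results are dangerous, here is a hedged toy (our illustration, not the paper's attack): a malicious detector can encode a membership query so that the published score reveals whether one individual is in the private graph.

```python
# Toy illustration (ours): a submitted "detector" leaks membership of a
# chosen target through the published evaluation accuracy.
def malicious_detector(graph, target_id):
    present = target_id in graph["nodes"]
    # flag everything as fraud iff the target exists; the reported
    # accuracy then depends on one individual's presence
    return {n: present for n in graph["nodes"]}

private_graph = {"nodes": {101, 202, 303}, "fraud": {202}}
preds = malicious_detector(private_graph, target_id=303)
acc = sum(preds[n] == (n in private_graph["fraud"])
          for n in private_graph["nodes"]) / len(private_graph["nodes"])
print(f"published accuracy {acc:.2f} encodes whether node 303 is present")
```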
arXiv Detail & Related papers (2025-07-30T03:20:15Z) - Privacy Audit as Bits Transmission: (Im)possibilities for Audit by One Run [7.850976675388593]
We introduce a unifying framework for privacy audits based on information-theoretic principles. We demystify privacy auditing in one run, identifying the conditions under which single-run audits are feasible or infeasible.
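A hedged back-of-the-envelope consistent with this framing (our reading, not the paper's theorem): an $\varepsilon$-DP mechanism bounds the log-likelihood ratio between neighboring runs pointwise, so one run is a weak channel,

$$\left|\log \frac{\mathrm{d}M(D)}{\mathrm{d}M(D')}\right| \le \varepsilon \quad\Longrightarrow\quad D_{\mathrm{KL}}\big(M(D)\,\|\,M(D')\big) \le \varepsilon,$$

so a single run conveys at most about $\varepsilon$ nats about a single secret bit, which is the kind of capacity constraint that can make single-run audits infeasible.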
arXiv Detail & Related papers (2025-01-29T16:38:51Z) - Adversarial Sample-Based Approach for Tighter Privacy Auditing in Final Model-Only Scenarios [5.116399056871577]
We introduce a novel auditing method that achieves tighter empirical lower bounds without additional assumptions. Our approach surpasses traditional canary-based adversarial methods and is effective in final-model-only scenarios.
arXiv Detail & Related papers (2024-12-02T17:52:16Z) - Auditing $f$-Differential Privacy in One Run [43.34594422920125]
Empirical auditing has emerged as a means of catching some of the flaws in the implementation of privacy-preserving algorithms.
We present a tight and efficient auditing procedure and analysis that can effectively assess the privacy of mechanisms.
arXiv Detail & Related papers (2024-10-29T17:02:22Z) - Nearly Tight Black-Box Auditing of Differentially Private Machine Learning [10.305660258428993]
This paper presents an auditing procedure for the Differentially Private Stochastic Gradient Descent (DP-SGD) algorithm in the black-box threat model.
The main intuition is to craft worst-case initial model parameters, as DP-SGD's privacy analysis is agnostic to the choice of the initial model parameters.
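For context, a hedged sketch of the standard attack-to-$\varepsilon$ conversion such audits rely on (the generic recipe with our variable names, not this paper's specific parameter crafting): $(\varepsilon,\delta)$-DP forces $\mathrm{FNR} \ge 1 - \delta - e^{\varepsilon}\,\mathrm{FPR}$ for any attack, so high-confidence upper bounds on the attack's error rates yield a lower bound on $\varepsilon$.

```python
# Hedged sketch of the standard attack-to-epsilon conversion used in DP-SGD
# audits: Clopper-Pearson upper bounds on FPR/FNR give a lower bound on eps.
import math
from scipy.stats import beta

def clopper_pearson_upper(errors, trials, conf=0.95):
    if errors >= trials:
        return 1.0
    return beta.ppf(conf, errors + 1, trials - errors)  # one-sided upper limit

def eps_lower_bound(fp, tn_trials, fn, tp_trials, delta=1e-5, conf=0.95):
    fpr = clopper_pearson_upper(fp, tn_trials, conf)
    fnr = clopper_pearson_upper(fn, tp_trials, conf)
    candidates = []
    for a, b in ((fpr, fnr), (fnr, fpr)):  # (eps, delta)-DP holds both ways
        if b > 0 and 1 - delta - a > 0:
            candidates.append(math.log((1 - delta - a) / b))
    return max(candidates, default=0.0)

# e.g., 1000 trials per world; the attack errs 60 and 55 times respectively:
print(eps_lower_bound(fp=60, tn_trials=1000, fn=55, tp_trials=1000))
```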
arXiv Detail & Related papers (2024-05-23T02:24:52Z) - How Private are DP-SGD Implementations? [61.19794019914523]
We show that there can be a substantial gap between the privacy analysis under the two types of batch sampling: shuffling and Poisson subsampling.
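A hedged illustration of the two batch-sampling schemes in question (our toy, with a made-up dataset size): the analysis commonly assumes Poisson subsampling, where each record joins the batch independently, while implementations often shuffle and slice fixed-size batches; the two induce different batch distributions, hence different privacy guarantees.

```python
# Toy contrast (ours) between Poisson subsampling and shuffle-and-slice.
import numpy as np
rng = np.random.default_rng(0)
n, q = 8, 0.25

poisson_batch = np.flatnonzero(rng.random(n) < q)  # random size, independent picks
shuffle_batch = rng.permutation(n)[: int(q * n)]   # fixed size, without replacement
print("Poisson:", poisson_batch, "shuffle:", shuffle_batch)
```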
arXiv Detail & Related papers (2024-03-26T13:02:43Z) - Privacy Profiles for Private Selection [21.162924003105484]
We work out an easy-to-use recipe that bounds privacy profiles of ReportNoisyMax and PrivateTuning using the privacy profiles of the base algorithms they corral.
Our approach improves over all regimes of interest and leads to substantial benefits in end-to-end private learning experiments.
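For reference, the privacy profile being bounded is the standard curve mapping each $\varepsilon$ to the smallest admissible $\delta$:

$$\delta_M(\varepsilon) \;=\; \sup_{D \sim D'}\, \sup_{S}\, \Big( \Pr[M(D) \in S] \;-\; e^{\varepsilon}\, \Pr[M(D') \in S] \Big),$$

so composing or selecting among base algorithms amounts to manipulating these curves rather than single $(\varepsilon,\delta)$ points.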
arXiv Detail & Related papers (2024-02-09T08:31:46Z) - Tight Auditing of Differentially Private Machine Learning [77.38590306275877]
For private machine learning, existing auditing mechanisms are not tight: they only give tight estimates under implausible worst-case assumptions.
We design an improved auditing scheme that yields tight privacy estimates for natural (not adversarially crafted) datasets.
arXiv Detail & Related papers (2023-02-15T21:40:33Z) - Sequential Kernelized Independence Testing [77.237958592189]
We design sequential kernelized independence tests inspired by kernelized dependence measures. We demonstrate the power of our approaches on both simulated and real data.
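The same e-process trick sketched for the auditor above yields a crude sequential independence test (a hedged toy, far simpler than the paper's kernelized statistics): for centered $X, Y$, independence makes the events $\{X>0\}$ and $\{Y>0\}$ agree with probability $1/2$, so we can bet against that baseline.

```python
# Crude sequential independence test by betting (toy, not the paper's test).
import numpy as np
rng = np.random.default_rng(0)

log_e, p0, p1, alpha = 0.0, 0.5, 0.6, 0.05
for n in range(1, 10_001):
    x = rng.normal()
    y = 0.8 * x + 0.6 * rng.normal()       # dependent pair, both centered at 0
    agree = int((x > 0) == (y > 0))        # Bernoulli(1/2) under independence
    log_e += agree * np.log(p1 / p0) + (1 - agree) * np.log((1 - p1) / (1 - p0))
    if log_e >= np.log(1.0 / alpha):       # anytime-valid by Ville's inequality
        print(f"independence rejected after {n} pairs")
        break
```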
arXiv Detail & Related papers (2022-12-14T18:08:42Z) - Normalized/Clipped SGD with Perturbation for Differentially Private Non-Convex Optimization [94.06564567766475]
DP-SGD and DP-NSGD mitigate the risk of large models memorizing sensitive training data.
We show that these two algorithms achieve similar best accuracy while DP-NSGD is comparatively easier to tune than DP-SGD.
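A hedged sketch of the two per-example gradient transforms being compared (simplified; the regularizer $r$ and constants are illustrative):

```python
# DP-SGD clips per-example gradients; DP-NSGD normalizes them (sketch).
import numpy as np

def dp_sgd_clip(g, C=1.0):
    return g * min(1.0, C / (np.linalg.norm(g) + 1e-12))  # norm capped at C

def dp_nsgd_normalize(g, C=1.0, r=0.01):
    return C * g / (np.linalg.norm(g) + r)  # near-unit norm; r avoids division by zero

g = np.array([3.0, 4.0])                    # norm 5, so both transforms shrink it
print(dp_sgd_clip(g), dp_nsgd_normalize(g))
# in both cases the transformed gradients are summed over the batch and
# Gaussian noise N(0, (sigma * C)^2 I) is added before the update
```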
arXiv Detail & Related papers (2022-06-27T03:45:02Z) - Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent [69.14164921515949]
We characterize privacy guarantees for individual examples when releasing models trained by DP-SGD.
We find that most examples enjoy stronger privacy guarantees than the worst-case bound.
This implies that groups that are underserved in terms of model utility simultaneously experience weaker privacy guarantees.
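In the Gaussian-DP lens this can be sketched as follows (our simplification, ignoring subsampling amplification): an example whose clipped per-step gradient norm is $c_t \le C$ incurs a per-step GDP parameter $c_t/(\sigma C)$, and steps compose by summing squares, so examples that rarely hit the clipping bound retain much stronger guarantees.

```python
# Hedged per-example GDP accounting sketch (ignores subsampling amplification).
import numpy as np

def per_example_gdp(clipped_norms, sigma, C):
    mus = np.asarray(clipped_norms) / (sigma * C)  # per-step mu for this example
    return float(np.sqrt(np.sum(mus ** 2)))        # GDP composition over steps

# typical example vs. one always at the clipping bound, over 1000 steps:
print(per_example_gdp([0.2] * 1000, sigma=1.0, C=1.0))  # ~6.3
print(per_example_gdp([1.0] * 1000, sigma=1.0, C=1.0))  # ~31.6
```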
arXiv Detail & Related papers (2022-06-06T13:49:37Z) - Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
arXiv Detail & Related papers (2021-02-23T18:25:13Z)