General-Purpose $f$-DP Estimation and Auditing in a Black-Box Setting
- URL: http://arxiv.org/abs/2502.07066v1
- Date: Mon, 10 Feb 2025 21:58:17 GMT
- Title: General-Purpose $f$-DP Estimation and Auditing in a Black-Box Setting
- Authors: Önder Askin, Holger Dette, Martin Dunsche, Tim Kutta, Yun Lu, Yu Wei, Vassilis Zikas
- Abstract summary: We propose new methods to statistically assess $f$-Differential Privacy ($f$-DP).
A challenge when deploying differentially private mechanisms is that DP is hard to validate.
We introduce new black-box methods for $f$-DP that, unlike existing approaches for this privacy notion, do not require prior knowledge of the investigated algorithm.
- Score: 7.052531099272798
- Abstract: In this paper we propose new methods to statistically assess $f$-Differential Privacy ($f$-DP), a recent refinement of differential privacy (DP) that remedies certain weaknesses of standard DP (including tightness under algorithmic composition). A challenge when deploying differentially private mechanisms is that DP is hard to validate, especially in the black-box setting. This has led to numerous empirical methods for auditing standard DP, while $f$-DP remains less explored. We introduce new black-box methods for $f$-DP that, unlike existing approaches for this privacy notion, do not require prior knowledge of the investigated algorithm. Our procedure yields a complete estimate of the $f$-DP trade-off curve, with theoretical guarantees of convergence. Additionally, we propose an efficient auditing method that empirically detects $f$-DP violations with statistical certainty, merging techniques from non-parametric estimation and optimal classification theory. Through experiments on a range of DP mechanisms, we demonstrate the effectiveness of our estimation and auditing procedures.
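To make the black-box setting concrete, below is a minimal sketch (not the authors' procedure) of how a trade-off curve can be estimated empirically: sample outputs of a mechanism on two neighboring datasets, fit non-parametric density estimates, and trace out the type I/type II error pairs of likelihood-ratio tests, which are optimal by the Neyman-Pearson lemma. The Gaussian counting-query mechanism, the dataset pair, and all function names here are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def mechanism(dataset, sigma=1.0, rng=None):
    """Toy black-box mechanism (assumption): Gaussian noise added to a counting query."""
    rng = np.random.default_rng() if rng is None else rng
    return float(np.sum(dataset)) + rng.normal(0.0, sigma)

def estimate_tradeoff_curve(mech, D, D_prime, n_samples=20000, n_thresholds=200, seed=0):
    """Empirically estimate the trade-off curve of `mech` on neighboring inputs D, D'.

    Fits kernel density estimates to the two output distributions and evaluates the
    (type I, type II) errors of likelihood-ratio tests over a grid of thresholds.
    """
    rng = np.random.default_rng(seed)
    p = np.array([mech(D, rng=rng) for _ in range(n_samples)])        # outputs on D
    q = np.array([mech(D_prime, rng=rng) for _ in range(n_samples)])  # outputs on D'

    kde_p, kde_q = gaussian_kde(p), gaussian_kde(q)
    grid = np.linspace(min(p.min(), q.min()), max(p.max(), q.max()), 2000)
    dx = grid[1] - grid[0]
    dens_p, dens_q = kde_p(grid), kde_q(grid)
    ratio = dens_q / np.maximum(dens_p, 1e-12)   # likelihood ratio q/p on the grid

    alphas, betas = [], []
    for t in np.quantile(ratio, np.linspace(0.0, 1.0, n_thresholds)):
        reject = ratio >= t                           # reject H0: "output came from D"
        alphas.append(np.sum(dens_p[reject]) * dx)    # type I error under D
        betas.append(np.sum(dens_q[~reject]) * dx)    # type II error under D'
    order = np.argsort(alphas)
    return np.array(alphas)[order], np.array(betas)[order]

# Example: neighboring datasets differing in a single record.
D = np.zeros(100)
D_prime = np.concatenate([np.zeros(99), [1.0]])
alpha, beta = estimate_tradeoff_curve(mechanism, D, D_prime)
# `beta` as a function of `alpha` approximates a trade-off curve f(alpha); for this
# toy mechanism it should lie close to the Gaussian-DP curve with mu = 1 / sigma.
```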
Related papers
- Achieving $\widetilde{\mathcal{O}}(\sqrt{T})$ Regret in Average-Reward POMDPs with Known Observation Models [56.92178753201331]
We tackle average-reward infinite-horizon POMDPs with an unknown transition model.
We present a novel and simple estimator that overcomes this barrier.
arXiv Detail & Related papers (2025-01-30T22:29:41Z) - Auditing Differential Privacy Guarantees Using Density Estimation [3.830092569453011]
We present a novel method for accurately auditing the differential privacy guarantees of DP mechanisms.
In particular, our solution is applicable to auditing DP guarantees of machine learning (ML) models.
arXiv Detail & Related papers (2024-06-07T10:52:15Z) - Differentially Private SGD Without Clipping Bias: An Error-Feedback Approach [62.000948039914135]
Using Differentially Private Stochastic Gradient Descent with Gradient Clipping (DPSGD-GC) to ensure Differential Privacy (DP) comes at the cost of model performance degradation.
We propose a new error-feedback (EF) DP algorithm as an alternative to DPSGD-GC.
We establish an algorithm-specific DP analysis for our proposed algorithm, providing privacy guarantees based on Rényi DP.
arXiv Detail & Related papers (2023-11-24T17:56:44Z) - Online non-parametric likelihood-ratio estimation by Pearson-divergence
functional minimization [55.98760097296213]
We introduce a new framework for online non-parametric LRE (OLRE) for the setting where pairs of iid observations $(x_t \sim p, x'_t \sim q)$ are observed over time; a minimal illustrative sketch of this setting is given after the related-papers list below.
We provide theoretical guarantees for the performance of the OLRE method along with empirical validation in synthetic experiments.
arXiv Detail & Related papers (2023-11-03T13:20:11Z) - Adaptive False Discovery Rate Control with Privacy Guarantee [1.4213973379473654]
We propose a differentially private adaptive FDR control method that can control the classic FDR metric exactly at a user-specified level $\alpha$ with privacy guarantee.
Compared to the non-private AdaPT, it incurs a small accuracy loss but significantly reduces the computation cost.
arXiv Detail & Related papers (2023-05-31T01:22:15Z) - Connect the Dots: Tighter Discrete Approximations of Privacy Loss
Distributions [49.726408540784334]
A key question in PLD-based accounting is how to approximate any (potentially continuous) PLD with a PLD over any specified discrete support.
We show that our pessimistic estimate is the best possible among all pessimistic estimates.
arXiv Detail & Related papers (2022-07-10T04:25:02Z) - Normalized/Clipped SGD with Perturbation for Differentially Private
Non-Convex Optimization [94.06564567766475]
DP-SGD and DP-NSGD mitigate the risk of large models memorizing sensitive training data.
We show that these two algorithms achieve similar best accuracy while DP-NSGD is comparatively easier to tune than DP-SGD.
arXiv Detail & Related papers (2022-06-27T03:45:02Z) - Private Stochastic Non-Convex Optimization: Adaptive Algorithms and
Tighter Generalization Bounds [72.63031036770425]
We propose differentially private (DP) algorithms for stochastic non-convex optimization.
On two popular deep learning tasks, we demonstrate empirical advantages over standard gradient methods.
arXiv Detail & Related papers (2020-06-24T06:01:24Z) - Tight Differential Privacy for Discrete-Valued Mechanisms and for the
Subsampled Gaussian Mechanism Using FFT [6.929834518749884]
We propose a numerical accountant for evaluating the tight $(\varepsilon,\delta)$-privacy loss for algorithms with discrete one-dimensional output.
We show that our approach allows decreasing the noise variance by up to 75 percent at equal privacy compared to existing bounds in the literature.
arXiv Detail & Related papers (2020-06-12T12:46:42Z)
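As referenced above, here is a minimal sketch of online non-parametric likelihood-ratio estimation in the setting where one pair $(x_t \sim p, x'_t \sim q)$ arrives per step: a linear-in-features estimator $r_\theta(x) = \theta^\top \phi(x)$ updated by stochastic gradient descent on the Pearson-divergence (uLSIF-style) objective $\tfrac{1}{2}\,\mathbb{E}_p[r_\theta^2] - \mathbb{E}_q[r_\theta]$. The RBF features, learning rate, and Gaussian toy streams are assumptions for illustration, not the OLRE paper's construction.

```python
import numpy as np

def rbf_features(x, centers, bandwidth=1.0):
    """Gaussian RBF features phi_i(x) = exp(-(x - c_i)^2 / (2 h^2))."""
    return np.exp(-((x - centers) ** 2) / (2.0 * bandwidth ** 2))

def online_lre(stream, centers, lr=0.05):
    """Online likelihood-ratio estimation r(x) ~ q(x) / p(x).

    Runs SGD on the Pearson-divergence objective
        J(theta) = 1/2 * E_p[r_theta(x)^2] - E_q[r_theta(x)],
    consuming one observation pair (x_t ~ p, x'_t ~ q) per update.
    """
    theta = np.zeros(len(centers))
    for x_p, x_q in stream:
        phi_p = rbf_features(x_p, centers)
        phi_q = rbf_features(x_q, centers)
        grad = (theta @ phi_p) * phi_p - phi_q   # unbiased stochastic gradient of J
        theta -= lr * grad
    return lambda x: max(float(theta @ rbf_features(x, centers)), 0.0)

# Example: p = N(0, 1), q = N(1, 1); the true ratio is exp(x - 1/2).
rng = np.random.default_rng(0)
stream = ((rng.normal(0.0, 1.0), rng.normal(1.0, 1.0)) for _ in range(20000))
ratio_hat = online_lre(stream, centers=np.linspace(-4.0, 5.0, 30))
print(ratio_hat(0.5))   # roughly exp(0) = 1 after enough updates
```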
This list is automatically generated from the titles and abstracts of the papers on this site.