Monitoring Violations of Differential Privacy over Time
- URL: http://arxiv.org/abs/2509.20283v1
- Date: Wed, 24 Sep 2025 16:15:51 GMT
- Title: Monitoring Violations of Differential Privacy over Time
- Authors: Önder Askin, Tim Kutta, Holger Dette
- Abstract summary: We investigate the sustained auditing of a mechanism that can go through changes during its development or deployment. Running state-of-the-art (static) auditors repeatedly requires excessive sampling efforts. We present a new monitoring procedure that extracts information from the entire deployment history of the algorithm.
- Score: 3.3900431852643114
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Auditing differential privacy has emerged as an important area of research that supports the design of privacy-preserving mechanisms. Privacy audits help to obtain empirical estimates of the privacy parameter, to expose flawed implementations of algorithms and to compare practical with theoretical privacy guarantees. In this work, we investigate an unexplored facet of privacy auditing: the sustained auditing of a mechanism that can go through changes during its development or deployment. Monitoring the privacy of algorithms over time comes with specific challenges. Running state-of-the-art (static) auditors repeatedly requires excessive sampling efforts, while the reliability of such methods deteriorates over time without proper adjustments. To overcome these obstacles, we present a new monitoring procedure that extracts information from the entire deployment history of the algorithm. This allows us to reduce sampling efforts, while sustaining reliable outcomes of our auditor. We derive formal guarantees with regard to the soundness of our methods and evaluate their performance for important mechanisms from the literature. Our theoretical findings and experiments demonstrate the efficacy of our approach.
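The abstract does not spell out the monitoring rule, but the general recipe behind empirical DP audits suggests what a sequential monitor could look like. The sketch below is a hypothetical illustration, not the authors' procedure: it naively pools attack outcomes across rounds (valid only while the mechanism is unchanged, which is precisely the complication the paper addresses) and uses a Hoeffding bound with a crude alpha / t^2 union-bound correction so the alarm stays valid over an unbounded horizon.

```python
import math

def eps_lower_bound(successes: int, trials: int, alpha: float) -> float:
    """Empirical lower bound on epsilon from a distinguishing attack's
    success rate p: for an eps-DP mechanism, balanced-guessing accuracy
    is at most e^eps / (1 + e^eps), hence eps >= log(p / (1 - p)).
    A Hoeffding bound turns the observed rate into a valid lower
    confidence bound on p at level 1 - alpha."""
    if trials == 0:
        return 0.0
    p_hat = successes / trials
    slack = math.sqrt(math.log(1.0 / alpha) / (2.0 * trials))
    p_low = min(max(p_hat - slack, 0.5), 1.0 - 1e-12)  # p <= 0.5 is uninformative
    return math.log(p_low / (1.0 - p_low))

def monitor(claimed_eps: float, rounds, alpha: float = 0.05):
    """Pool attack outcomes over deployment rounds and raise an alarm
    once the pooled bound exceeds the claimed epsilon. Spending
    alpha / t**2 at round t keeps the total error probability below
    alpha * pi^2 / 6 (a crude correction; the paper's procedure and
    guarantees differ)."""
    successes = trials = 0
    for t, (s, n) in enumerate(rounds, start=1):
        successes += s
        trials += n
        lower = eps_lower_bound(successes, trials, alpha / t**2)
        if lower > claimed_eps:
            return t, lower  # violation flagged at round t
    return None

# Toy run: ten rounds of 1000 audit trials each, with the attack
# succeeding 75% of the time, far above what eps = 0.5 permits.
print(monitor(claimed_eps=0.5, rounds=[(750, 1000)] * 10))
```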
Related papers
- Observational Auditing of Label Privacy [16.143689489883382]
Differential privacy (DP) auditing is essential for evaluating privacy guarantees in machine learning systems. Existing auditing methods require modifying the training dataset, for instance by injecting out-of-distribution canaries or removing samples from training. We introduce a novel observational auditing framework that leverages the inherent randomness of data distributions.
arXiv Detail & Related papers (2025-11-18T03:12:59Z)
- Tight Privacy Audit in One Run [14.266167758603986]
We show that our method achieves tight audit results for various differentially private protocols. We also provide experiments whose conclusions contrast with previous work on the parameter settings for one-run privacy audits.
arXiv Detail & Related papers (2025-09-10T15:55:03Z)
- Do We Need to Verify Step by Step? Rethinking Process Supervision from a Theoretical Perspective [59.61868506896214]
We show that, under standard data coverage assumptions, reinforcement learning through outcome supervision is no more statistically difficult than through process supervision. We prove that any policy's advantage function can serve as an optimal process reward model.
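For context, the advantage function in that claim is the standard reinforcement-learning quantity; writing it out makes the statement concrete:

```latex
A^{\pi}(s, a) \;=\; Q^{\pi}(s, a) - V^{\pi}(s),
\qquad V^{\pi}(s) = \mathbb{E}_{a \sim \pi(\cdot \mid s)}\!\left[ Q^{\pi}(s, a) \right],
```

i.e., the claim is that a valid per-step (process) reward can be read off from the policy itself rather than requiring a separately trained step-level verifier.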
arXiv Detail & Related papers (2025-02-14T22:21:56Z)
- Privacy Audit as Bits Transmission: (Im)possibilities for Audit by One Run [7.850976675388593]
We introduce a unifying framework for privacy audits based on information-theoretic principles. We demystify the method of privacy auditing by one run, identifying the conditions under which single-run audits are feasible or infeasible.
arXiv Detail & Related papers (2025-01-29T16:38:51Z)
- Auditing $f$-Differential Privacy in One Run [43.34594422920125]
Empirical auditing has emerged as a means of catching some of the flaws in the implementation of privacy-preserving algorithms.
We present a tight and efficient auditing procedure and analysis that can effectively assess the privacy of mechanisms.
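For readers unfamiliar with the notion being audited: $f$-differential privacy (Dong, Roth, and Su) expresses privacy as a bound on hypothesis-testing trade-offs,

```latex
M \text{ is } f\text{-DP} \quad \Longleftrightarrow \quad
T\big(M(D),\, M(D')\big) \;\ge\; f
\quad \text{for all neighboring } D, D',
```

where $T(P, Q)(\alpha)$ is the smallest type II error achievable at type I error level $\alpha$ when testing $P$ against $Q$. Auditing $f$-DP therefore amounts to empirically lower-bounding points on this trade-off curve.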
arXiv Detail & Related papers (2024-10-29T17:02:22Z)
- TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data [50.797729676285876]
We propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
We theoretically quantify the privacy guarantee through the lens of the emerging $f$-differential privacy framework, as well as the Byzantine resilience of the proposed algorithm.
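The abstract names the two building blocks without details; below is a minimal, hypothetical sketch of what a ternary compressor combined with a coordinate-wise majority vote can look like. The unbiased stochastic quantizer and the toy Byzantine round are illustrative guesses, not the paper's exact construction, which additionally calibrates the randomness to obtain differential privacy.

```python
import numpy as np

rng = np.random.default_rng(0)

def ternary_compress(g: np.ndarray, scale: float) -> np.ndarray:
    """Stochastically quantize each coordinate to {-1, 0, +1} so that
    E[output] = g / scale whenever |g| <= scale. One of the simplest
    unbiased ternary compressors; the paper's version is also the
    source of its privacy noise."""
    p_nonzero = np.clip(np.abs(g) / scale, 0.0, 1.0)
    send = rng.random(g.shape) < p_nonzero
    return np.sign(g) * send

def majority_vote(votes):
    """Coordinate-wise majority over the workers' ternary messages:
    the sign of the vote total, which a minority of Byzantine workers
    cannot flip on coordinates with a clear honest majority."""
    return np.sign(np.sum(votes, axis=0))

# Toy round: 10 honest workers compress the same gradient, 3 Byzantine
# workers send the opposite signs; the vote recovers the true signs
# with high probability (each coordinate is sent with prob >= 0.7).
true_grad = np.array([0.9, -0.8, 0.7])
votes = [ternary_compress(true_grad, scale=1.0) for _ in range(10)]
votes += [-np.sign(true_grad)] * 3
print(majority_vote(votes))  # expected: [ 1. -1.  1.]
```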
arXiv Detail & Related papers (2024-02-16T16:41:14Z)
- A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether the mechanism meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
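As a reading aid, the three EVR steps map onto a short control-flow skeleton. Everything below is a placeholder built around the textbook Laplace mechanism; the estimator and verifier names and behavior are hypothetical stand-ins, not the paper's randomized privacy tester.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(data, query, eps, sensitivity=1.0):
    """Textbook Laplace mechanism, standing in for an arbitrary DP mechanism."""
    return query(data) + rng.laplace(scale=sensitivity / eps)

def estimate_epsilon(eps_claimed: float) -> float:
    """Step 1 (estimate): in the paper this comes from a possibly
    heuristic privacy accountant; the pass-through here is a stub."""
    return eps_claimed

def verify_guarantee(eps: float, delta: float) -> bool:
    """Step 2 (verify): the paper uses a randomized tester whose
    failure probability is folded into delta; this stub only
    sanity-checks the parameters."""
    return eps > 0 and 0.0 < delta < 1.0

def estimate_verify_release(data, query, eps_claimed=1.0, delta=1e-5):
    """Skeleton of the estimate-verify-release flow: release the query
    output only if the estimated guarantee passes verification."""
    eps = estimate_epsilon(eps_claimed)
    if not verify_guarantee(eps, delta):
        raise RuntimeError("guarantee not verified; output withheld")
    return laplace_mechanism(data, query, eps), (eps, delta)  # step 3

print(estimate_verify_release(np.arange(100), np.mean))
```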
arXiv Detail & Related papers (2023-04-17T00:38:01Z)
- Tight Auditing of Differentially Private Machine Learning [77.38590306275877]
For private machine learning, existing auditing mechanisms are tight, but only under implausible worst-case assumptions (e.g., a fully adversarial dataset).
We design an improved auditing scheme that yields tight privacy estimates for natural (not adversarially crafted) datasets.
arXiv Detail & Related papers (2023-02-15T21:40:33Z)
- A General Framework for Auditing Differentially Private Machine Learning [27.99806936918949]
We present a framework to statistically audit the privacy guarantee conferred by a differentially private machine learner in practice.
Our work develops a general methodology to empirically evaluate the privacy of differentially private machine learning implementations.
arXiv Detail & Related papers (2022-10-16T21:34:18Z)
- Debugging Differential Privacy: A Case Study for Privacy Auditing [60.87570714269048]
We show that auditing can also be used to find flaws in (purportedly) differentially private schemes.
In this case study, we audit a recent open source implementation of a differentially private deep learning algorithm and find, with 99.99999999% confidence, that the implementation does not satisfy the claimed differential privacy guarantee.
arXiv Detail & Related papers (2022-02-24T17:31:08Z)
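The statistical machinery behind audits like this last one is fairly standard: bound the attacker's false-positive and false-negative rates with Clopper-Pearson intervals and convert them into a high-confidence lower bound on epsilon via the inequality FPR + e^eps * FNR >= 1 - delta (and its symmetric counterpart), which every (eps, delta)-DP mechanism must satisfy. The sketch below follows that generic recipe, not the cited paper's exact pipeline, and assumes SciPy is available.

```python
import math
from scipy.stats import beta  # Clopper-Pearson bounds via Beta quantiles

def cp_upper(errors: int, trials: int, alpha: float) -> float:
    """One-sided Clopper-Pearson upper confidence bound on an error rate."""
    if errors >= trials:
        return 1.0
    return float(beta.ppf(1.0 - alpha, errors + 1, trials - errors))

def empirical_eps_lower_bound(fp, fn, n_neg, n_pos, delta=1e-5, alpha=0.01):
    """Lower bound on epsilon, valid with probability >= 1 - alpha:
    any (eps, delta)-DP mechanism forces FPR + e^eps * FNR >= 1 - delta
    for every distinguishing attack, so simultaneously small error
    rates are only possible when eps is large."""
    fpr_hi = cp_upper(fp, n_neg, alpha / 2)  # split alpha across both bounds
    fnr_hi = cp_upper(fn, n_pos, alpha / 2)
    bounds = [0.0]
    if fnr_hi > 0 and 1.0 - delta - fpr_hi > fnr_hi:
        bounds.append(math.log((1.0 - delta - fpr_hi) / fnr_hi))
    if fpr_hi > 0 and 1.0 - delta - fnr_hi > fpr_hi:
        bounds.append(math.log((1.0 - delta - fnr_hi) / fpr_hi))
    return max(bounds)

# Toy audit: 10,000 trials per world, 100 errors in each direction.
# The resulting bound (around 4.3) would refute a claim of eps = 1.
print(empirical_eps_lower_bound(fp=100, fn=100, n_neg=10_000, n_pos=10_000))
```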