Purifying Approximate Differential Privacy with Randomized Post-processing
- URL: http://arxiv.org/abs/2503.21071v1
- Date: Thu, 27 Mar 2025 01:10:40 GMT
- Title: Purifying Approximate Differential Privacy with Randomized Post-processing
- Authors: Yingyu Lin, Erchi Wang, Yi-An Ma, Yu-Xiang Wang
- Abstract summary: We propose a framework to convert $(\varepsilon, \delta)$-approximate Differential Privacy (DP) mechanisms into $(\varepsilon, 0)$-pure DP mechanisms.
- Score: 17.069115079029206
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a framework to convert $(\varepsilon, \delta)$-approximate Differential Privacy (DP) mechanisms into $(\varepsilon, 0)$-pure DP mechanisms, a process we call "purification". This algorithmic technique leverages randomized post-processing with calibrated noise to eliminate the $\delta$ parameter while preserving utility. By combining the tighter utility bounds and computational efficiency of approximate DP mechanisms with the stronger guarantees of pure DP, our approach achieves the best of both worlds. We illustrate the applicability of this framework in various settings, including Differentially Private Empirical Risk Minimization (DP-ERM), data-dependent DP mechanisms such as Propose-Test-Release (PTR), and query release tasks. To the best of our knowledge, this is the first work to provide a systematic method for transforming approximate DP into pure DP while maintaining competitive accuracy and computational efficiency.
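To make the pipeline concrete, here is a minimal Python sketch of the purification shape: an $(\varepsilon, \delta)$-DP Gaussian release followed by randomized post-processing with extra noise. The `purify` helper and its noise scale are placeholders for illustration only; the paper's calibration of the post-processing noise, which is what actually eliminates $\delta$, is more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_mechanism(data, sensitivity, eps, delta):
    """Release a mean with (eps, delta)-approximate DP via Gaussian noise
    (classic calibration sigma = sensitivity * sqrt(2 ln(1.25/delta)) / eps)."""
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / eps
    return np.mean(data) + rng.normal(0.0, sigma)

def purify(approx_dp_output, post_noise_scale):
    """Randomized post-processing: perturb the already-private output with
    additional Laplace noise.  NOTE: the scale needed to actually remove
    delta depends on the paper's analysis; this constant is a placeholder,
    not the paper's calibration."""
    return approx_dp_output + rng.laplace(0.0, post_noise_scale)

data = rng.uniform(0, 1, size=1000)      # records in [0, 1]
sensitivity = 1.0 / len(data)            # sensitivity of the mean query
release = gaussian_mechanism(data, sensitivity, eps=1.0, delta=1e-5)
pure_release = purify(release, post_noise_scale=10 * sensitivity)
print(pure_release)
```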
Related papers
- General-Purpose $f$-DP Estimation and Auditing in a Black-Box Setting [7.052531099272798]
We propose new methods to statistically assess $f$-Differential Privacy ($f$-DP). A challenge when deploying differentially private mechanisms is that DP is hard to validate. We introduce new black-box methods for $f$-DP that, unlike existing approaches for this privacy notion, do not require prior knowledge of the investigated algorithm.
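As a toy illustration of the black-box setting (not the paper's estimator): sample the mechanism on two neighboring inputs and estimate one point on the trade-off curve with a threshold test. The `mechanism`, the datasets, and the threshold below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def mechanism(x):
    # Toy black-box mechanism: noisy sum (stand-in for the audited algorithm).
    return np.sum(x) + rng.normal(0.0, 1.0)

# Neighboring datasets differing in one record.
d0 = np.zeros(10)
d1 = np.zeros(10); d1[0] = 1.0

# Sample outputs; the auditor only sees these draws (black-box access).
n = 20_000
out0 = np.array([mechanism(d0) for _ in range(n)])
out1 = np.array([mechanism(d1) for _ in range(n)])

# Estimate one point (alpha, beta) on the trade-off curve with the
# threshold test "reject H0 if output > t".
t = 0.5
alpha = np.mean(out0 > t)   # type-I error under d0
beta = np.mean(out1 <= t)   # type-II error under d1
print(f"estimated trade-off point: alpha={alpha:.3f}, beta={beta:.3f}")
```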
arXiv Detail & Related papers (2025-02-10T21:58:17Z)
- Improved Communication-Privacy Trade-offs in $L_2$ Mean Estimation under Streaming Differential Privacy [47.997934291881414]
Existing mean estimation schemes are usually optimized for $L_\infty$ geometry and rely on random rotation or Kashin's representation to adapt to $L_2$ geometry.
We introduce a novel privacy accounting method for the sparsified Gaussian mechanism that incorporates the randomness inherent in sparsification into the DP analysis.
Unlike previous approaches, our accounting algorithm directly operates in $L_2$ geometry, yielding mean squared errors (MSEs) that converge quickly to those of the Gaussian mechanism.
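A schematic sketch of a sparsified Gaussian mechanism for mean estimation under an $L_2$ norm bound; the coordinate-sampling scheme, rescaling, and noise scale here are illustrative assumptions, not the paper's construction or its accounting.

```python
import numpy as np

rng = np.random.default_rng(2)

def sparsified_gaussian(v, k, sigma):
    """Keep k random coordinates of v (rescaled to stay unbiased), then add
    Gaussian noise.  Sparsification saves communication, and its randomness
    can be folded into the privacy accounting."""
    d = v.shape[0]
    idx = rng.choice(d, size=k, replace=False)
    sparse = np.zeros(d)
    sparse[idx] = v[idx] * (d / k)        # inverse-probability rescaling
    return sparse + rng.normal(0.0, sigma, size=d)

# Client vectors with bounded L2 norm; the server averages noisy reports.
clients = [rng.normal(size=32) for _ in range(100)]
clients = [c / max(1.0, np.linalg.norm(c)) for c in clients]  # clip to L2 ball
reports = [sparsified_gaussian(c, k=8, sigma=0.5) for c in clients]
estimate = np.mean(reports, axis=0)
true_mean = np.mean(clients, axis=0)
print("MSE:", np.mean((estimate - true_mean) ** 2))
```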
arXiv Detail & Related papers (2024-05-02T03:48:47Z)
- Provable Privacy with Non-Private Pre-Processing [56.770023668379615]
We propose a general framework to evaluate the additional privacy cost incurred by non-private data-dependent pre-processing algorithms.
Our framework establishes upper bounds on the overall privacy guarantees by utilising two new technical notions.
arXiv Detail & Related papers (2024-03-19T17:54:49Z)
- Differentially Private SGD Without Clipping Bias: An Error-Feedback Approach [62.000948039914135]
Using Differentially Private Stochastic Gradient Descent with Gradient Clipping (DPSGD-GC) to ensure Differential Privacy (DP) comes at the cost of model performance degradation.
We propose a new error-feedback (EF) DP algorithm as an alternative to DPSGD-GC.
We establish an algorithm-specific DP analysis for our proposed algorithm, providing privacy guarantees based on Rényi DP.
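A minimal sketch of the generic error-feedback idea applied to DP-SGD: the component removed by clipping is buffered and re-injected at the next step, reducing clipping bias. The feedback rule and all constants below are illustrative assumptions, not necessarily the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

def clip(g, c):
    norm = np.linalg.norm(g)
    return g * min(1.0, c / (norm + 1e-12))

def dpsgd_ef(grad_fn, w, steps, lr=0.1, c=1.0, sigma=1.0):
    """DP-SGD with per-step clipping plus a generic error-feedback buffer:
    the part of the gradient removed by clipping is remembered and
    re-injected at the next step."""
    e = np.zeros_like(w)                  # error-feedback buffer
    for _ in range(steps):
        g = grad_fn(w) + e                # correct with accumulated error
        g_clip = clip(g, c)
        e = g - g_clip                    # store what clipping removed
        noisy = g_clip + rng.normal(0.0, sigma * c, size=w.shape)
        w = w - lr * noisy
    return w

# Toy quadratic objective f(w) = 0.5 * ||w - w*||^2.
w_star = np.array([3.0, -2.0])
grad = lambda w: w - w_star
print(dpsgd_ef(grad, w=np.zeros(2), steps=200))
```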
arXiv Detail & Related papers (2023-11-24T17:56:44Z)
- Unified Enhancement of Privacy Bounds for Mixture Mechanisms via $f$-Differential Privacy [41.51051636162107]
This paper focuses on improving privacy bounds for shuffling models and one-iteration differentially private gradient descent.
We derive a closed-form expression for the trade-off function of shuffling models that improves on the most up-to-date results.
We also study an $f$-DP analog of the advanced joint convexity of the hockey-stick divergence related to $(\epsilon,\delta)$-DP.
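For reference, the hockey-stick divergence behind these bounds: a mechanism $M$ is $(\epsilon,\delta)$-DP iff $H_{e^{\epsilon}}(M(D)\,\|\,M(D')) \le \delta$ for all neighboring $D, D'$ (these are the standard definitions, not the paper's new result):

```latex
H_{e^{\epsilon}}(P \,\|\, Q) \;=\; \int \max\bigl\{\, p(x) - e^{\epsilon} q(x),\; 0 \,\bigr\}\, dx
```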
arXiv Detail & Related papers (2023-10-30T19:37:51Z)
- Tractable MCMC for Private Learning with Pure and Gaussian Differential Privacy [23.12198546384976]
Posterior sampling provides $\varepsilon$-pure differential privacy guarantees.
It does not suffer from the potentially unbounded privacy breach introduced by $(\varepsilon,\delta)$-approximate DP.
In practice, however, one needs to apply approximate sampling methods such as Markov chain Monte Carlo.
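A toy Metropolis-Hastings sketch of this setting: exact sampling from the tempered posterior below is $\varepsilon$-pure DP by an exponential-mechanism argument when the per-record log-likelihood is bounded, while the MCMC chain only approximates it. The target density, the bound B, and the step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def log_post(theta, data, eps, B=1.0):
    """Tempered log-posterior: (eps / 2B) * sum_i loglik_i + log prior.
    With each per-record log-likelihood bounded in [-B, 0], exact sampling
    from this density is eps-pure DP (exponential-mechanism argument)."""
    loglik = -np.clip((data - theta) ** 2, 0.0, B)   # bounded log-likelihood
    return (eps / (2 * B)) * loglik.sum() - 0.5 * theta ** 2  # N(0,1) prior

def metropolis_hastings(data, eps, steps=5000, step_size=0.3):
    theta = 0.0
    lp = log_post(theta, data, eps)
    for _ in range(steps):
        prop = theta + rng.normal(0.0, step_size)
        lp_prop = log_post(prop, data, eps)
        if np.log(rng.uniform()) < lp_prop - lp:     # accept/reject step
            theta, lp = prop, lp_prop
    return theta

data = rng.normal(1.0, 1.0, size=200)
print("private posterior draw:", metropolis_hastings(data, eps=1.0))
```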
arXiv Detail & Related papers (2023-10-23T07:54:39Z)
- Normalized/Clipped SGD with Perturbation for Differentially Private Non-Convex Optimization [94.06564567766475]
DP-SGD and DP-NSGD mitigate the risk of large models memorizing sensitive training data.
We show that these two algorithms achieve similar best accuracy while DP-NSGD is comparatively easier to tune than DP-SGD.
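The two per-sample operations side by side, stated as code (generic textbook definitions; the tuning comparison is the paper's empirical finding):

```python
import numpy as np

def clipped(g, c):
    """DP-SGD style: rescale only if the norm exceeds the threshold c."""
    return g * min(1.0, c / np.linalg.norm(g))

def normalized(g, c, r=0.01):
    """DP-NSGD style: always normalize (regularizer r avoids division by
    zero); this removes the need to tune c against the raw gradient scale."""
    return c * g / (np.linalg.norm(g) + r)

g = np.array([3.0, 4.0])                  # ||g|| = 5
print(clipped(g, c=1.0), normalized(g, c=1.0))
```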
arXiv Detail & Related papers (2022-06-27T03:45:02Z)
- Differentially Private ADMM Algorithms for Machine Learning [38.648113004535155]
We study efficient differentially private alternating direction method of multipliers (ADMM) algorithms via gradient perturbation.
We propose the first differentially private ADMM (DP-ADMM) algorithm with a performance guarantee of $(\epsilon,\delta)$-differential privacy.
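A sketch of the gradient-perturbation idea inside ADMM, on a lasso problem: the exact x-minimization is replaced by one noisy linearized gradient step. The noise scale and step sizes are placeholders rather than calibrated privacy parameters, and this is not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dp_admm_lasso(A, b, lam=0.1, rho=1.0, lr=0.05, sigma=0.1, steps=300):
    """ADMM for lasso with a noisy linearized x-update: one gradient step
    perturbed with Gaussian noise replaces the exact x-minimization
    (the 'gradient perturbation' idea)."""
    n, d = A.shape
    x = np.zeros(d); z = np.zeros(d); u = np.zeros(d)
    for _ in range(steps):
        grad = A.T @ (A @ x - b) / n + rho * (x - z + u)
        x = x - lr * (grad + rng.normal(0.0, sigma, size=d))  # noisy primal step
        z = soft_threshold(x + u, lam / rho)                  # prox of L1 term
        u = u + x - z                                         # dual update
    return z

A = rng.normal(size=(200, 10))
b = A @ np.array([1.0, -1.0] + [0.0] * 8) + 0.1 * rng.normal(size=200)
print(dp_admm_lasso(A, b).round(2))
```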
arXiv Detail & Related papers (2020-10-31T01:37:24Z)
- Private Stochastic Non-Convex Optimization: Adaptive Algorithms and Tighter Generalization Bounds [72.63031036770425]
We propose differentially private (DP) algorithms for stochastic non-convex optimization.
We demonstrate the empirical advantages of two popular deep learning methods over standard gradient methods.
arXiv Detail & Related papers (2020-06-24T06:01:24Z)
- Provably Efficient Safe Exploration via Primal-Dual Policy Optimization [105.7510838453122]
We study the Safe Reinforcement Learning (SRL) problem using the Constrained Markov Decision Process (CMDP) formulation.
We present a provably efficient online policy optimization algorithm for CMDP with safe exploration in the function approximation setting.
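The core primal-dual loop in schematic form, on a hand-rolled two-action CMDP (a toy example, not the paper's function-approximation algorithm): the primal step ascends the Lagrangian, and the dual step performs projected ascent on the multiplier.

```python
import numpy as np

# Toy two-action CMDP (bandit): per-action rewards, costs, and a cost budget.
r = np.array([1.0, 0.2])
c = np.array([1.0, 0.0])
budget = 0.5

def primal_dual(iters=500, lr=0.05, dual_lr=0.05):
    """Schematic CMDP primal-dual loop: gradient ascent on the Lagrangian
    J_r(p) - lam * (J_c(p) - budget) over the probability p of playing
    action 0, with projected dual ascent on the multiplier lam."""
    p, lam = 0.5, 0.0
    for _ in range(iters):
        grad_p = (r[0] - r[1]) - lam * (c[0] - c[1])   # d/dp of Lagrangian
        p = np.clip(p + lr * grad_p, 0.0, 1.0)         # primal ascent
        J_c = p * c[0] + (1 - p) * c[1]                # expected cost
        lam = max(0.0, lam + dual_lr * (J_c - budget)) # projected dual ascent
    return p, lam

p, lam = primal_dual()
print(f"policy p={p:.2f}, multiplier lam={lam:.2f}")   # p settles near budget
```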
arXiv Detail & Related papers (2020-03-01T17:47:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.