Private Hyperparameter Tuning with Ex-Post Guarantee
- URL: http://arxiv.org/abs/2508.15183v1
- Date: Thu, 21 Aug 2025 02:42:23 GMT
- Title: Private Hyperparameter Tuning with Ex-Post Guarantee
- Authors: Badih Ghazi, Pritish Kamath, Alexander Knop, Ravi Kumar, Pasin Manurangsi, Chiyuan Zhang
- Abstract summary: "Utility-first" privacy mechanisms prioritize a desired level of utility and then determine the corresponding privacy cost. We extend the work of Wu et al. [2019] and Liu and Talwar [2019] to support any sequence of private estimators. We demonstrate that hyperparameter tuning for these estimators, including the selection of an optimal privacy budget, can be performed without additional privacy cost.
- Score: 98.43027866582979
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The conventional approach in differential privacy (DP) literature formulates the privacy-utility trade-off with a "privacy-first" perspective: for a predetermined level of privacy, a certain utility is achievable. However, practitioners often operate under a "utility-first" paradigm, prioritizing a desired level of utility and then determining the corresponding privacy cost. Wu et al. [2019] initiated a formal study of this "utility-first" perspective by introducing ex-post DP. They demonstrated that by adding correlated Laplace noise and progressively reducing it on demand, a sequence of increasingly accurate estimates of a private parameter can be generated, with the privacy cost attributed only to the least noisy iterate released. This led to a Laplace mechanism variant that achieves a specified utility with minimal privacy loss. However, their work, and similar findings by Whitehouse et al. [2022], are primarily limited to simple mechanisms based on Laplace or Gaussian noise. In this paper, we significantly generalize these results. In particular, we extend the work of Wu et al. [2019] and Liu and Talwar [2019] to support any sequence of private estimators, incurring at most a doubling of the original privacy budget. Furthermore, we demonstrate that hyperparameter tuning for these estimators, including the selection of an optimal privacy budget, can be performed without additional privacy cost. Finally, we extend our results to ex-post Renyi DP, further broadening the applicability of utility-first privacy mechanisms.
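The correlated-noise construction the abstract describes, adding Laplace noise and progressively refining it on demand so that only the least noisy released iterate is charged for privacy, can be sketched as follows. This is a minimal illustration built on the standard gradual-release coupling for Laplace noise (couple each coarser iterate to the next finer one); the function names, the scale schedule, and the stopping rule are illustrative choices, not the paper's API.

```python
import numpy as np

def correlated_laplace_path(true_value, scales, rng):
    """Sample a sequence of Laplace-noised estimates, coarsest first,
    coupled so that all iterates up to index k reveal no more than the
    single iterate at scale b_k. `scales` must be strictly decreasing."""
    n = len(scales)
    x = [None] * n
    # Sample the least noisy iterate first ...
    x[-1] = true_value + rng.laplace(scale=scales[-1])
    # ... then couple each coarser iterate to the finer one:
    # X_k = X_{k+1} with prob (b_{k+1}/b_k)^2, else X_{k+1} + Lap(b_k).
    # This kernel makes each X_k marginally Lap(b_k) around true_value.
    for k in range(n - 2, -1, -1):
        if rng.random() < (scales[k + 1] / scales[k]) ** 2:
            x[k] = x[k + 1]
        else:
            x[k] = x[k + 1] + rng.laplace(scale=scales[k])
    return x

def utility_first_release(true_value, sensitivity, target_scale, rng):
    """Utility-first loop: walk down the noise scales until the utility
    target is met; the ex-post privacy cost is sensitivity / b_released,
    charged only for the least noisy iterate actually released."""
    scales = [10.0, 5.0, 2.0, 1.0, 0.5]  # illustrative schedule
    path = correlated_laplace_path(true_value, scales, rng)
    for estimate, b in zip(path, scales):
        if b <= target_scale:
            return estimate, sensitivity / b  # (estimate, ex-post epsilon)
    return path[-1], sensitivity / scales[-1]
```

The key design point is that the whole trajectory is sampled jointly up front; the analyst merely chooses how far along it to walk, so stopping early genuinely caps the privacy loss rather than merely the number of queries.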
Related papers
- Privacy-Utility Tradeoffs in Quantum Information Processing [13.088625380700933]
We study optimal tradeoffs for both generic and application-specific utility metrics when privacy is quantified by $(\varepsilon,\delta)$-quantum local differential privacy. We derive a lower bound on the number of samples required to achieve a fixed accuracy guarantee with high probability. We conclude by initiating the study of private classical shadows, which promise useful applications for private learning tasks.
arXiv Detail & Related papers (2026-02-11T04:21:45Z) - A General Framework for Per-record Differential Privacy [10.959311645622632]
Per-record Differential Privacy (PrDP) addresses this by defining the privacy budget as a function of each record. Existing solutions either handle specific privacy functions or adopt relaxed PrDP definitions. We propose a general and practical framework that enables any standard DP mechanism to support PrDP.
arXiv Detail & Related papers (2025-11-24T11:44:10Z) - High-Dimensional Asymptotics of Differentially Private PCA [4.168157981135696]
In differential privacy, statistics of a sensitive dataset are privatized by introducing random noise. It remains unclear if such high noise levels are truly necessary or a limitation of the proof techniques. This paper explores whether we can obtain sharp privacy characterizations that identify the smallest noise level required to achieve a target privacy level.
arXiv Detail & Related papers (2025-11-10T16:17:16Z) - Meeting Utility Constraints in Differential Privacy: A Privacy-Boosting Approach [7.970280110429423]
We propose a privacy-boosting framework that is compatible with most noise-adding DP mechanisms. Our framework enhances the likelihood of outputs falling within a preferred subset of the support to meet utility requirements. We show that our framework achieves lower privacy loss than standard DP mechanisms under utility constraints.
arXiv Detail & Related papers (2024-12-13T23:34:30Z) - Private Language Models via Truncated Laplacian Mechanism [18.77713904999236]
We propose a novel private embedding method called the high dimensional truncated Laplacian mechanism.
We show that our method has a lower variance compared to the previous private word embedding methods.
Remarkably, even in the high privacy regime, our approach only incurs a slight decrease in utility compared to the non-private scenario.
arXiv Detail & Related papers (2024-10-10T15:25:02Z) - Provable Privacy with Non-Private Pre-Processing [56.770023668379615]
We propose a general framework to evaluate the additional privacy cost incurred by non-private data-dependent pre-processing algorithms.
Our framework establishes upper bounds on the overall privacy guarantees by utilising two new technical notions.
arXiv Detail & Related papers (2024-03-19T17:54:49Z) - Shifted Interpolation for Differential Privacy [6.1836947007564085]
Noisy gradient descent and its variants are the predominant algorithms for differentially private machine learning.
This paper establishes the "privacy amplification by iteration" phenomenon in the unifying framework of $f$-differential privacy.
Notably, this leads to the first exact privacy analysis in the foundational setting of strongly convex optimization.
arXiv Detail & Related papers (2024-03-01T04:50:04Z) - Adaptive Privacy Composition for Accuracy-first Mechanisms [55.53725113597539]
Noise reduction mechanisms produce increasingly accurate answers.
Analysts only pay the privacy cost of the least noisy or most accurate answer released.
There has yet to be any study on how ex-post private mechanisms compose.
We develop privacy filters that allow an analyst to adaptively switch between differentially private and ex-post private mechanisms.
arXiv Detail & Related papers (2023-06-24T00:33:34Z) - A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
arXiv Detail & Related papers (2023-04-17T00:38:01Z) - Brownian Noise Reduction: Maximizing Privacy Subject to Accuracy Constraints [53.01656650117495]
There is a disconnect between how researchers and practitioners handle privacy-utility tradeoffs.
The Brownian mechanism works by first adding Gaussian noise of high variance, corresponding to the final point of a simulated Brownian motion.
We complement our Brownian mechanism with ReducedAboveThreshold, a generalization of the classical AboveThreshold algorithm.
arXiv Detail & Related papers (2022-06-15T01:43:37Z) - Smoothed Differential Privacy [55.415581832037084]
Differential privacy (DP) is a widely-accepted and widely-applied notion of privacy based on worst-case analysis.
In this paper, we propose a natural extension of DP following the worst average-case idea behind the celebrated smoothed analysis.
We prove that any discrete mechanism with sampling procedures is more private than what DP predicts, while many continuous mechanisms with sampling procedures are still non-private under smoothed DP.
arXiv Detail & Related papers (2021-07-04T06:55:45Z)
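The Brownian mechanism in the "Brownian Noise Reduction" entry above admits a short sketch: release the endpoint of a simulated Brownian motion (high-variance Gaussian noise), then walk backward along a Brownian bridge to reveal successively less noisy Gaussian estimates, again paying only for the least noisy point released. The function name and the variance schedule below are illustrative assumptions, not that paper's actual interface.

```python
import numpy as np

def brownian_noise_reduction(true_value, times, rng):
    """Coarsest-first Gaussian noise reduction via a Brownian bridge.
    `times` is a strictly decreasing list of variances t_1 > ... > t_n;
    iterate k is marginally true_value + N(0, t_k)."""
    t_prev = times[0]
    b_prev = rng.normal(scale=np.sqrt(t_prev))  # endpoint B(t_1)
    out = [true_value + b_prev]
    for t in times[1:]:
        # Brownian bridge conditional:
        # B(t) | B(t_prev) ~ N((t/t_prev) * B(t_prev), t * (t_prev - t) / t_prev)
        mean = (t / t_prev) * b_prev
        var = t * (t_prev - t) / t_prev
        b_prev = rng.normal(loc=mean, scale=np.sqrt(var))
        t_prev = t
        out.append(true_value + b_prev)
    return out
```

As with the Laplace construction, the refinements are correlated through the bridge rather than sampled independently, which is what lets the privacy accounting charge only the smallest time (i.e., smallest variance) actually revealed.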
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.