Revisiting Differentially Private Hyper-parameter Tuning
- URL: http://arxiv.org/abs/2402.13087v2
- Date: Tue, 4 Jun 2024 09:49:34 GMT
- Title: Revisiting Differentially Private Hyper-parameter Tuning
- Authors: Zihang Xiang, Tianhao Wang, Chenglong Wang, Di Wang
- Abstract summary: Recent works propose a generic private selection solution for the tuning process, yet a fundamental question persists: is this privacy bound tight?
This paper provides an in-depth examination of this question.
Our findings underscore a substantial gap between the current theoretical privacy bound and the empirical bound derived even under strong audit setups.
- Score: 20.278323915802805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the application of differential privacy in hyper-parameter tuning, a crucial process in machine learning that involves selecting the best hyper-parameter from several candidates. Unlike many private learning algorithms, including the prevalent DP-SGD, the privacy implications of tuning remain insufficiently understood or are often ignored entirely. Recent works propose a generic private selection solution for the tuning process, yet a fundamental question persists: is this privacy bound tight? This paper provides an in-depth examination of this question. Initially, we present studies affirming that the current privacy analysis for private selection is indeed tight in general. However, when we specifically study the hyper-parameter tuning problem in a white-box setting, such tightness no longer holds. This is first demonstrated by applying a privacy audit to the tuning process. Our findings underscore a substantial gap between the current theoretical privacy bound and the empirical bound derived even under strong audit setups. This gap motivates our subsequent investigations. Our further study provides improved privacy results for private hyper-parameter tuning due to its distinct properties. Our results demonstrate broader applicability compared to prior analyses, which are limited to specific parameter configurations.
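For intuition, here is a minimal sketch of the kind of privacy audit the abstract refers to: play a distinguishing game on two adjacent inputs and convert the attack's observed true/false positive rates into an empirical lower bound on epsilon. The toy Laplace mechanism, datasets, and threshold below are illustrative choices of ours, not the paper's audit setup.

```python
# A minimal sketch of a privacy audit, assuming a toy Laplace mechanism;
# the paper's actual audit targets the hyper-parameter tuning procedure.
import math
import random

def noisy_count(data, scale=1.0):
    """Toy DP mechanism: Laplace-noised count over a 0/1 dataset."""
    lap = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return sum(data) + lap

def audit_epsilon(trials=200_000, threshold=0.5, delta=0.0):
    d0 = [0] * 10               # adjacent datasets differing in one record
    d1 = [0] * 9 + [1]
    fpr = sum(noisy_count(d0) > threshold for _ in range(trials)) / trials
    tpr = sum(noisy_count(d1) > threshold for _ in range(trials)) / trials
    # (eps, delta)-DP forces TPR <= exp(eps) * FPR + delta for any test,
    # so the observed rates yield an empirical lower bound on eps.
    return math.log(max(tpr - delta, 1e-12) / max(fpr, 1e-12))

print(f"empirical lower bound on epsilon: {audit_epsilon():.3f}")
```

With scale 1.0 the mechanism satisfies 1-DP, and the audit reports a bound below 1; a gap between the two is exactly what the paper measures for tuning.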
Related papers
- A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
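As a rough reading, the EVR control flow looks like the sketch below; `estimate_eps` and `verify` are placeholders for the paper's estimator and Monte Carlo verifier, whose formal guarantees are not reproduced here.

```python
# Hedged sketch of the estimate-verify-release control flow; `estimate_eps`
# and `verify` stand in for the paper's estimator and Monte Carlo verifier.
def estimate_verify_release(mechanism, data, estimate_eps, verify):
    eps_hat = estimate_eps(mechanism)       # 1. estimate the privacy parameter
    if not verify(mechanism, eps_hat):      # 2. verify the estimate actually holds
        return None                         #    abort the release on failure
    return mechanism(data), eps_hat         # 3. release output with its guarantee
```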
arXiv Detail & Related papers (2023-04-17T00:38:01Z) - On Differential Privacy and Adaptive Data Analysis with Bounded Space [76.10334958368618]
We study the space complexity of the two related fields of differential privacy and adaptive data analysis.
We show that there exists a problem P that requires exponentially more space to be solved efficiently with differential privacy.
The line of work on adaptive data analysis focuses on understanding the number of samples needed for answering a sequence of adaptive queries.
arXiv Detail & Related papers (2023-02-11T14:45:31Z) - Revisiting Hyperparameter Tuning with Differential Privacy [1.6425841685973384]
We provide a framework for privacy-preserving machine learning with differential privacy.
We show that its additional privacy loss bound incurred by hyperparameter tuning is upper-bounded by the square root of the gained utility.
We note that the additional privacy loss bound empirically scales like the square root of the logarithm of the utility term, benefiting from the design of the doubling step.
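Read together, and in notation of our own choosing (the paper's constants and exact definitions may differ), the two claims above say roughly:

```latex
% Hedged restatement; U is the utility gained by tuning, c and c' are
% unspecified constants, and eps_tuning is the extra privacy loss.
\varepsilon_{\text{tuning}} \;\le\; c\,\sqrt{U} \quad\text{(proved upper bound)},
\qquad
\varepsilon_{\text{tuning}} \;\approx\; c'\,\sqrt{\log U} \quad\text{(empirical scaling)}.
```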
arXiv Detail & Related papers (2022-11-03T14:42:19Z) - Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
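One way to read the per-attribute guarantee, in our notation (the paper's formal definition may differ in details): the standard DP inequality is required only for datasets differing in a single attribute of one record, which permits a smaller parameter than whole-record DP.

```latex
% Hedged reading of per-attribute (partial) DP; x ~_attr x' denotes datasets
% differing in one attribute of one person's record.
\Pr[M(x) \in S] \;\le\; e^{\varepsilon_{\text{attr}}}\,\Pr[M(x') \in S]
\quad \text{for all } x \sim_{\text{attr}} x' \text{ and all measurable } S.
```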
arXiv Detail & Related papers (2022-09-08T22:43:50Z) - Privacy of Noisy Stochastic Gradient Descent: More Iterations without More Privacy Loss [34.66940399825547]
Industry has widely adopted a simple algorithm: Stochastic Gradient Descent with noise (a.k.a. Stochastic Gradient Langevin Dynamics).
Questions about this algorithm's privacy loss remain open -- even in the seemingly simple setting of smooth convex losses over a bounded domain.
We characterize the differential privacy up to a constant factor and show that after a small burn-in period, running SGD longer leaks no further privacy.
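A minimal sketch of the algorithm in question, noisy (projected) gradient descent over a bounded domain; the loss, step size, noise scale, and projection radius below are illustrative choices of ours:

```python
# Hedged sketch of noisy projected gradient descent over a bounded domain,
# the setting in which the summary's "no further privacy leakage" result holds.
import numpy as np

def noisy_gd(grad, w0, steps, lr=0.1, sigma=1.0, radius=1.0, seed=0):
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float)
    for _ in range(steps):
        w = w - lr * (grad(w) + sigma * rng.standard_normal(w.shape))
        norm = np.linalg.norm(w)
        if norm > radius:              # project back onto the bounded domain,
            w = w * (radius / norm)    # as the bounded-domain analysis assumes
    return w

# Example: smooth convex loss 0.5 * ||w - c||^2 with gradient w - c.
c = np.array([0.3, -0.2])
print(noisy_gd(lambda w: w - c, w0=[0.0, 0.0], steps=1000))
```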
arXiv Detail & Related papers (2022-05-27T02:09:55Z) - Debugging Differential Privacy: A Case Study for Privacy Auditing [60.87570714269048]
We show that auditing can also be used to find flaws in (purportedly) differentially private schemes.
In this case study, we audit a recent open source implementation of a differentially private deep learning algorithm and find, with 99.99999999% confidence, that the implementation does not satisfy the claimed differential privacy guarantee.
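The quoted confidence level comes from treating attack outcomes as Bernoulli trials; below is a hedged sketch of the standard statistical ingredient, one-sided Clopper-Pearson bounds, with made-up counts (the audited implementation itself is not reproduced).

```python
# Hedged sketch of how an auditor attaches confidence to a DP-violation claim:
# one-sided Clopper-Pearson bounds on the attack's error rates.
import math
from scipy.stats import beta

def cp_lower(successes, trials, confidence):
    """One-sided lower confidence bound on a Bernoulli success rate."""
    if successes == 0:
        return 0.0
    return beta.ppf(1 - confidence, successes, trials - successes + 1)

def cp_upper(successes, trials, confidence):
    """One-sided upper confidence bound on a Bernoulli success rate."""
    if successes == trials:
        return 1.0
    return beta.ppf(confidence, successes + 1, trials - successes)

conf = 1 - 1e-10                           # matches the 99.99999999% claim
tpr_lo = cp_lower(9_900, 10_000, conf)     # invented attack counts
fpr_hi = cp_upper(100, 10_000, conf)
print(f"eps lower bound: {math.log(tpr_lo / fpr_hi):.2f} at {conf:.10%} confidence")
```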
arXiv Detail & Related papers (2022-02-24T17:31:08Z) - Hyperparameter Tuning with Renyi Differential Privacy [31.522386779876598]
We study the privacy leakage resulting from the multiple training runs needed to fine-tune the hyperparameter values of a differentially private algorithm.
We provide privacy guarantees for hyperparameter search procedures within the framework of Renyi Differential Privacy.
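The analyzed template, sketched below under our own naming: repeat DP training runs a random number of times and release only the best run. The geometric stopping rule is one of the repetition distributions covered by the Renyi-DP analysis; `train_and_score` stands in for any DP training run.

```python
# Hedged sketch of private selection with a random number of repetitions;
# `train_and_score` is a placeholder returning a (model, score) pair.
import random

def tune_privately(candidates, train_and_score, stop_prob=0.1, seed=None):
    rng = random.Random(seed)
    best = None
    while True:
        hp = rng.choice(candidates)        # sample a hyperparameter uniformly
        run = train_and_score(hp)          # one full DP training run
        if best is None or run[1] > best[1]:
            best = run
        if rng.random() < stop_prob:       # geometric number of repetitions
            return best                    # keeps the overall RDP cost bounded
```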
arXiv Detail & Related papers (2021-10-07T16:58:46Z) - Smoothed Differential Privacy [55.415581832037084]
Differential privacy (DP) is a widely-accepted and widely-applied notion of privacy based on worst-case analysis.
In this paper, we propose a natural extension of DP following the worst average-case idea behind the celebrated smoothed analysis.
We prove that any discrete mechanism with sampling procedures is more private than what DP predicts, while many continuous mechanisms with sampling procedures are still non-private under smoothed DP.
arXiv Detail & Related papers (2021-07-04T06:55:45Z) - Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z) - Individual Privacy Accounting via a Renyi Filter [33.65665839496798]
We give a method for tighter privacy loss accounting based on the value of a personalized privacy loss estimate for each individual.
Our filter is simpler and tighter than the known filter for $(\epsilon,\delta)$-differential privacy by Rogers et al.
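A minimal sketch of the filtering idea, with plain epsilon-summation standing in for the paper's Renyi composition:

```python
# Hedged sketch of an individual privacy filter: per-person loss is tracked
# and a person is excluded once a charge would exceed their budget. Simple
# epsilon-summation replaces the paper's Renyi filter for illustration.
class IndividualPrivacyFilter:
    def __init__(self, budget):
        self.budget = budget
        self.spent = {}                    # individual -> accumulated loss

    def try_charge(self, individual, eps):
        spent = self.spent.get(individual, 0.0)
        if spent + eps > self.budget:      # filter trips: stop using this
            return False                   # individual's data from now on
        self.spent[individual] = spent + eps
        return True
```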
arXiv Detail & Related papers (2020-08-25T17:49:48Z)