Revisiting Hyperparameter Tuning with Differential Privacy
- URL: http://arxiv.org/abs/2211.01852v1
- Date: Thu, 3 Nov 2022 14:42:19 GMT
- Title: Revisiting Hyperparameter Tuning with Differential Privacy
- Authors: Youlong Ding and Xueyang Wu
- Abstract summary: We provide an effective hyperparameter tuning framework for privacy-preserving machine learning with differential privacy.
We show that the additional privacy loss incurred by hyperparameter tuning is upper-bounded by the square root of the gained utility.
We note that the additional privacy loss empirically scales like the square root of the logarithm of the utility term, thanks to the design of the doubling step.
- Score: 1.6425841685973384
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hyperparameter tuning is common practice in applied machine
learning but is typically ignored in the literature on privacy-preserving
machine learning due to its negative effect on the overall privacy parameter.
In this paper, we aim to tackle this fundamental yet
challenging problem by providing an effective hyperparameter tuning framework
with differential privacy. The proposed method allows us to adopt a broader
hyperparameter search space and even to perform a grid search over the whole
space, since its privacy loss parameter is independent of the number of
hyperparameter candidates. Interestingly, it instead correlates with the
utility gained from hyperparameter searching, revealing an explicit and
mandatory trade-off between privacy and utility. Theoretically, we show that
the additional privacy loss incurred by hyperparameter tuning is upper-bounded
by the square root of the gained utility. Empirically, however, this
additional privacy loss scales like the square root of the logarithm of the
utility term, thanks to the design of the doubling step.
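To make the doubling step concrete, here is a minimal, purely illustrative sketch (not the paper's actual algorithm): a geometrically growing prefix of a candidate grid is evaluated with a stand-in DP training routine, and the search stops once the best observed utility stops improving, so no further privacy cost is paid once the utility gain is exhausted. The objective, noise model, and stopping rule are all hypothetical.

```python
import random

def dp_train_and_score(hparam, epsilon, rng):
    """Hypothetical stand-in for one differentially private training run:
    a toy utility plus noise whose scale shrinks as the budget epsilon grows."""
    true_utility = 1.0 - abs(hparam - 0.3)  # toy objective, peaked at 0.3
    return true_utility + rng.gauss(0.0, 0.1 / epsilon)

def doubling_search(candidates, epsilon_per_eval, tol=1e-3, seed=0):
    """Illustrative doubling step: evaluate a geometrically growing prefix of
    the candidate list, stopping once the best score stops improving."""
    rng = random.Random(seed)
    best_score, best_hp = float("-inf"), None
    k, evaluated = 1, 0
    while evaluated < len(candidates):
        prev_best = best_score
        for hp in candidates[evaluated:min(k, len(candidates))]:
            score = dp_train_and_score(hp, epsilon_per_eval, rng)
            if score > best_score:
                best_score, best_hp = score, hp
        evaluated = min(k, len(candidates))
        if best_score - prev_best < tol:
            break  # utility gain exhausted: stop paying extra privacy cost
        k *= 2
    return best_hp, best_score

grid = [i / 20 for i in range(21)]  # a grid search over the whole space
hp, score = doubling_search(grid, epsilon_per_eval=1.0)
```

Because the stopping rule depends on observed utility rather than on the grid size, the number of evaluations (and hence the privacy cost of tuning) is decoupled from the number of hyperparameter candidates, which is the property the abstract highlights.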
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on data and allows for defining non-sensitive spatio-temporal regions without DP application, or for combining differential privacy with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z) - Revisiting Differentially Private Hyper-parameter Tuning [20.278323915802805]
Recent works propose a generic private selection solution for the tuning process, yet a fundamental question persists: is this privacy bound tight?
This paper provides an in-depth examination of this question.
Our findings underscore a substantial gap between current theoretical privacy bound and the empirical bound derived even under strong audit setups.
arXiv Detail & Related papers (2024-02-20T15:29:49Z) - DP-HyPO: An Adaptive Private Hyperparameter Optimization Framework [31.628466186344582]
We introduce DP-HyPO, a pioneering framework for adaptive private hyperparameter optimization.
We provide a comprehensive differential privacy analysis of our framework.
We empirically demonstrate the effectiveness of DP-HyPO on a diverse set of real-world datasets.
arXiv Detail & Related papers (2023-06-09T07:55:46Z) - Theoretically Principled Federated Learning for Balancing Privacy and
Utility [61.03993520243198]
We propose a general learning framework for the protection mechanisms that protects privacy via distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z) - A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR)
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies that the mechanism meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
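The estimate-verify-release flow can be sketched as a small skeleton. This is hypothetical wiring only: the toy noisy-sum mechanism and threshold verifier stand in for the paper's actual privacy estimator and verification procedure, and the key point shown is only the fail-closed ordering (nothing is released unless verification passes).

```python
import random

def noisy_sum(data, noise_scale, rng):
    """Toy Gaussian-noise mechanism standing in for an arbitrary DP mechanism."""
    return sum(data) + rng.gauss(0.0, noise_scale)

def evr_release(data, noise_scale, claimed_epsilon, verifier, seed=0):
    """Estimate-verify-release skeleton: release the output only if the
    verifier accepts the claimed privacy parameter; otherwise fail closed."""
    if not verifier(noise_scale, claimed_epsilon):
        return None  # verification failed: release nothing
    rng = random.Random(seed)
    return noisy_sum(data, noise_scale, rng)

# Placeholder verifier: accept iff the noise is large enough for the claim.
# This is a toy rule, not a real DP audit.
def accept(scale, eps):
    return scale >= 1.0 / eps

out = evr_release([1, 0, 1, 1], noise_scale=2.0, claimed_epsilon=1.0, verifier=accept)
```

Failing closed is what distinguishes EVR from simply trusting the estimated privacy parameter: an unverified claim never results in a release.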
arXiv Detail & Related papers (2023-04-17T00:38:01Z) - A New Linear Scaling Rule for Private Adaptive Hyperparameter Optimization [57.450449884166346]
We propose an adaptive HPO method to account for the privacy cost of HPO.
We obtain state-of-the-art performance on 22 benchmark tasks, across computer vision and natural language processing, across pretraining and finetuning.
arXiv Detail & Related papers (2022-12-08T18:56:37Z) - Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
arXiv Detail & Related papers (2022-09-08T22:43:50Z) - A Blessing of Dimensionality in Membership Inference through
Regularization [29.08230123469755]
We show how the number of parameters of a model can induce a privacy-utility trade-off.
We then show that if coupled with proper generalization regularization, increasing the number of parameters of a model can actually increase both its privacy and performance.
arXiv Detail & Related papers (2022-05-27T15:44:00Z) - Hyperparameter Tuning with Renyi Differential Privacy [31.522386779876598]
We study the privacy leakage resulting from the multiple training runs needed to fine-tune the hyperparameters of a differentially private algorithm.
We provide privacy guarantees for hyperparameter search procedures within the framework of Renyi Differential Privacy.
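A common recipe in this line of work is random-stopping private selection: draw the number of training runs from a geometric distribution, evaluate that many randomly chosen candidates with a DP routine, and release only the best candidate, never the intermediate scores. The sketch below illustrates the shape of such a procedure; the toy evaluation function and the parameter `gamma` are assumptions for illustration, not the paper's exact construction or its Renyi-DP accounting.

```python
import random

def private_selection(candidates, dp_eval, gamma=0.1, seed=0):
    """Random-stopping private selection: draw the number of runs K from a
    geometric distribution, evaluate K random candidates with a DP routine,
    and release only the best one."""
    rng = random.Random(seed)
    k = 1
    while rng.random() > gamma:  # K ~ Geometric(gamma), mean 1/gamma
        k += 1
    best_hp, best_score = None, float("-inf")
    for _ in range(k):
        hp = rng.choice(candidates)
        score = dp_eval(hp, rng)  # one DP training run (toy stand-in here)
        if score > best_score:
            best_hp, best_score = hp, score
    return best_hp

# Hypothetical DP evaluation: toy utility plus noise.
def noisy_eval(hp, rng):
    return -abs(hp - 0.5) + rng.gauss(0.0, 0.05)

learning_rates = [0.1, 0.3, 0.5, 0.7]
best = private_selection(learning_rates, noisy_eval)
```

Randomizing the number of runs is what makes the overall privacy cost grow only mildly with the (expected) number of training runs, rather than linearly as naive composition would give.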
arXiv Detail & Related papers (2021-10-07T16:58:46Z) - Gaussian Process Uniform Error Bounds with Unknown Hyperparameters for
Safety-Critical Applications [71.23286211775084]
We introduce robust Gaussian process uniform error bounds in settings with unknown hyperparameters.
Our approach computes a confidence region in the space of hyperparameters, which enables us to obtain a probabilistic upper bound for the model error.
Experiments show that the bound performs significantly better than vanilla and fully Bayesian Gaussian processes.
arXiv Detail & Related papers (2021-09-06T17:10:01Z) - Efficient Hyperparameter Optimization for Differentially Private Deep
Learning [1.7205106391379026]
We formulate a general optimization framework for establishing a desirable privacy-utility tradeoff.
We study three cost-effective algorithms for use in the proposed framework: evolutionary, Bayesian, and reinforcement learning.
As we believe our work has implications to be utilized in the pipeline of private deep learning, we open-source our code at https://github.com/AmanPriyanshu/DP-HyperparamTuning.
arXiv Detail & Related papers (2021-08-09T09:18:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.