DP-HyPO: An Adaptive Private Hyperparameter Optimization Framework
- URL: http://arxiv.org/abs/2306.05734v2
- Date: Mon, 27 Nov 2023 01:00:18 GMT
- Title: DP-HyPO: An Adaptive Private Hyperparameter Optimization Framework
- Authors: Hua Wang, Sheng Gao, Huanyu Zhang, Weijie J. Su, Milan Shen
- Abstract summary: We introduce DP-HyPO, a pioneering framework for ``adaptive'' private hyperparameter optimization.
We provide a comprehensive differential privacy analysis of our framework.
We empirically demonstrate the effectiveness of DP-HyPO on a diverse set of real-world datasets.
- Score: 31.628466186344582
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hyperparameter optimization, also known as hyperparameter tuning, is a widely
recognized technique for improving model performance. Regrettably, when
training private ML models, many practitioners often overlook the privacy risks
associated with hyperparameter optimization, which could potentially expose
sensitive information about the underlying dataset. Currently, the sole
existing approach to allow privacy-preserving hyperparameter optimization is to
uniformly and randomly select hyperparameters for a number of runs,
subsequently reporting the best-performing hyperparameter. In contrast, in
non-private settings, practitioners commonly utilize ``adaptive''
hyperparameter optimization methods such as Gaussian process-based
optimization, which select the next candidate based on information gathered
from previous outputs. This substantial contrast between private and
non-private hyperparameter optimization underscores a critical concern. In our
paper, we introduce DP-HyPO, a pioneering framework for ``adaptive'' private
hyperparameter optimization, aiming to bridge the gap between private and
non-private hyperparameter optimization. To accomplish this, we provide a
comprehensive differential privacy analysis of our framework. Furthermore, we
empirically demonstrate the effectiveness of DP-HyPO on a diverse set of
real-world datasets.
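The contrast the abstract draws can be illustrated with a toy sketch (everything here is hypothetical: the objective, the names, and the adaptive rule, which is a deliberately simplified stand-in for Gaussian-process-based selection, not DP-HyPO itself):

```python
import random

def score(lr):
    # Hypothetical validation metric: peaks at lr = 0.1.
    return -(lr - 0.1) ** 2

def uniform_random_hpo(n_runs, rng):
    # The existing private approach: draw each candidate independently
    # and uniformly, then report only the best-performing one.
    return max(score(rng.uniform(0.0, 1.0)) for _ in range(n_runs))

def adaptive_hpo(n_runs, rng):
    # Simplified adaptive selection: each new candidate is sampled near
    # the best configuration found so far, i.e. the next choice depends
    # on information gathered from previous outputs.
    best_lr = rng.uniform(0.0, 1.0)
    best_val = score(best_lr)
    for _ in range(n_runs - 1):
        lr = min(1.0, max(0.0, rng.gauss(best_lr, 0.05)))
        val = score(lr)
        if val > best_val:
            best_lr, best_val = lr, val
    return best_val

print(uniform_random_hpo(20, random.Random(0)))
print(adaptive_hpo(20, random.Random(0)))
```

The adaptive strategy leaks information across runs through its candidate choices, which is exactly why it needs the dedicated privacy analysis the paper provides.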
Related papers
- Adaptive Preference Scaling for Reinforcement Learning with Human Feedback [103.36048042664768]
Reinforcement learning from human feedback (RLHF) is a prevalent approach to align AI systems with human values.
We propose a novel adaptive preference loss, underpinned by distributionally robust optimization (DRO)
Our method is versatile and can be readily adapted to various preference optimization frameworks.
arXiv Detail & Related papers (2024-06-04T20:33:22Z)
- Differentially Private Fine-Tuning of Diffusion Models [22.454127503937883]
The integration of Differential Privacy with diffusion models (DMs) presents a promising yet challenging frontier.
Recent developments in this field have highlighted the potential for generating high-quality synthetic data by pre-training on public data.
We propose a strategy optimized for private diffusion models, which minimizes the number of trainable parameters to enhance the privacy-utility trade-off.
arXiv Detail & Related papers (2024-06-03T14:18:04Z)
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Forecast (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
- Practical Differentially Private Hyperparameter Tuning with Subsampling [8.022555128083026]
We propose a new class of differentially private (DP) machine learning (ML) algorithms, where the number of random search samples is randomized itself.
We focus on lowering both the DP bounds and the computational cost of these methods by using only a random subset of the sensitive data.
We provide a Rényi differential privacy analysis for the proposed method and experimentally show that it consistently leads to a better privacy-utility trade-off.
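The randomized-run-count idea can be sketched as follows (a minimal illustration under assumptions: the geometric distribution, the function names, and the candidate grid are all hypothetical, not the paper's exact construction):

```python
import random

def private_random_search(candidates, score, mean_runs, rng):
    # Sketch: instead of fixing the number of random-search trials, draw
    # the trial count K from a distribution (geometric here, with mean
    # roughly mean_runs), run K uniformly chosen configurations, and
    # report only the best score.
    k = 1
    while rng.random() > 1.0 / mean_runs:
        k += 1
    return max(score(rng.choice(candidates)) for _ in range(k))

rng = random.Random(0)
best = private_random_search(
    [0.01, 0.03, 0.1, 0.3],      # hypothetical learning-rate grid
    lambda lr: -abs(lr - 0.1),   # hypothetical validation objective
    mean_runs=10,
    rng=rng,
)
print(best)
```

Randomizing the number of runs is what makes the tighter differential privacy accounting for the "report the best run" output possible.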
arXiv Detail & Related papers (2023-01-27T21:01:58Z)
- A New Linear Scaling Rule for Private Adaptive Hyperparameter Optimization [57.450449884166346]
We propose an adaptive HPO method to account for the privacy cost of HPO.
We obtain state-of-the-art performance on 22 benchmark tasks, across computer vision and natural language processing, across pretraining and finetuning.
arXiv Detail & Related papers (2022-12-08T18:56:37Z)
- AUTOMATA: Gradient Based Data Subset Selection for Compute-Efficient Hyper-parameter Tuning [72.54359545547904]
We propose a gradient-based subset selection framework for hyperparameter tuning.
We show that using gradient-based data subsets for hyperparameter tuning achieves significantly faster turnaround times and speedups of 3×-30×.
arXiv Detail & Related papers (2022-03-15T19:25:01Z)
- Private Adaptive Optimization with Side Information [48.91141546624768]
AdaDPS is a general framework that uses non-sensitive side information to precondition the gradients.
We show AdaDPS reduces the amount of noise needed to achieve similar privacy guarantees.
Our results show that AdaDPS improves accuracy by 7.7% (absolute) on average.
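The preconditioning idea can be sketched in a DP-SGD-style update (a hedged illustration, not the library's API: the function name is hypothetical, and it assumes the side information is a per-coordinate second-moment estimate obtained from non-sensitive data, applied before clipping and noising):

```python
import math
import random

def preconditioned_private_step(grad, side_info, clip_norm, noise_mult, rng):
    # Precondition each coordinate with non-sensitive side information
    # (e.g. second moments estimated on public data) BEFORE clipping and
    # noising, so the fixed-scale noise hits an already well-scaled gradient.
    eps = 1e-8
    pre = [g / (math.sqrt(s) + eps) for g, s in zip(grad, side_info)]
    # Standard DP-SGD-style L2 clipping of the preconditioned gradient.
    norm = math.sqrt(sum(p * p for p in pre))
    scale = min(1.0, clip_norm / (norm + eps))
    clipped = [p * scale for p in pre]
    # Gaussian noise calibrated to the clipping norm.
    return [c + rng.gauss(0.0, noise_mult * clip_norm) for c in clipped]
```

Because the side information is non-sensitive, the preconditioning itself costs no privacy; only the clipped, noised gradient touches the private data.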
arXiv Detail & Related papers (2022-02-12T03:02:06Z)
- The Role of Adaptive Optimizers for Honest Private Hyperparameter Selection [12.38071940409141]
We show that standard composition tools outperform more advanced techniques in many settings.
We draw upon limiting behaviour of Adam in the DP setting to design a new and more efficient tool.
arXiv Detail & Related papers (2021-11-09T01:56:56Z)
- Optimizing Large-Scale Hyperparameters via Automated Learning Algorithm [97.66038345864095]
We propose a new hyperparameter optimization method with zeroth-order hyper-gradients (HOZOG)
Specifically, we first formulate hyperparameter optimization as an A-based constrained optimization problem.
Then, we use the average zeroth-order hyper-gradients to update hyperparameters.
arXiv Detail & Related papers (2021-02-17T21:03:05Z)
- Tuning Word2vec for Large Scale Recommendation Systems [14.074296985040704]
Word2vec is a powerful machine learning tool that emerged from Natural Language Processing (NLP)
We show that unconstrained optimization yields an average 221% improvement in hit rate over the parameters.
We demonstrate a 138% average improvement in hit rate with a runtime-budget-constrained hyperparameter optimization.
arXiv Detail & Related papers (2020-09-24T10:50:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.