DP-HYPE: Distributed Differentially Private Hyperparameter Search
- URL: http://arxiv.org/abs/2510.04902v2
- Date: Tue, 07 Oct 2025 07:00:58 GMT
- Title: DP-HYPE: Distributed Differentially Private Hyperparameter Search
- Authors: Johannes Liebenow, Thorsten Peinemann, Esfandiar Mohammadi
- Abstract summary: We show that DP-HYPE preserves the strong notion of differential privacy called client-level differential privacy. We also provide bounds on its utility guarantees, that is, the probability of reaching a compromise.
- Score: 2.778533469495935
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The tuning of hyperparameters in distributed machine learning can substantially impact model performance. When hyperparameters are tuned on sensitive data, privacy becomes an important challenge, and differential privacy has emerged as the de facto standard for provable privacy. A standard setting in distributed learning tasks is that clients agree on a shared setup, i.e., they find a compromise from a set of hyperparameters, such as the learning rate of the model to be trained. Yet, prior work on differentially private hyperparameter tuning either uses computationally expensive cryptographic protocols, determines hyperparameters separately for each client, or applies differential privacy locally, which can lead to undesirable utility-privacy trade-offs. In this work, we present our algorithm DP-HYPE, which performs a distributed and privacy-preserving hyperparameter search by conducting a distributed vote based on clients' local hyperparameter evaluations. In this way, DP-HYPE selects hyperparameters that constitute a compromise supported by the majority of clients, while maintaining scalability and independence from the specific learning task. We prove that DP-HYPE preserves the strong notion of client-level differential privacy and, importantly, show that its privacy guarantees do not depend on the number of hyperparameters. We also provide bounds on its utility guarantees, that is, the probability of reaching a compromise, and implement DP-HYPE as a submodule of the popular Flower framework for distributed machine learning. In addition, we evaluate its performance on multiple benchmark data sets in iid as well as multiple non-iid settings and demonstrate high utility of DP-HYPE even under small privacy budgets.
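The abstract does not spell out the mechanism, but the voting idea can be illustrated with a minimal sketch: each client casts a one-hot vote for its locally best candidate, and the server privatizes the aggregated vote histogram before selecting the winner. The Gaussian mechanism, the helper names, and the sensitivity bookkeeping below are illustrative assumptions, not the authors' protocol.

```python
import numpy as np

def local_vote(evaluate, candidates):
    """Client side: score every candidate hyperparameter configuration on
    local data and cast a one-hot vote for the best one."""
    scores = [evaluate(hp) for hp in candidates]
    vote = np.zeros(len(candidates))
    vote[int(np.argmax(scores))] = 1.0
    return vote

def dp_select(votes, epsilon, delta):
    """Server side: aggregate the one-hot votes and add Gaussian noise
    calibrated to client-level sensitivity. Adding or removing one client
    changes the histogram by at most 1 in L2 norm, independent of the
    number of candidates, which is why the noise scale (and hence the
    privacy guarantee) does not grow with the search-space size."""
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon  # sensitivity 1
    histogram = np.sum(votes, axis=0)
    noisy = histogram + np.random.normal(0.0, sigma, size=histogram.shape)
    return int(np.argmax(noisy))
```

In this sketch only the winning index is released, so the selection step is client-level (epsilon, delta)-DP by the standard Gaussian-mechanism analysis (for epsilon at most 1) together with post-processing.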
Related papers
- Machine Learning with Privacy for Protected Attributes [56.44253915927481]
We refine the definition of differential privacy (DP) to create a more general and flexible framework that we call feature differential privacy (FDP). Our definition is simulation-based and allows for both addition/removal and replacement variants of privacy, and can handle arbitrary separation of protected and non-protected features. We apply our framework to various machine learning tasks and show that it can significantly improve the utility of DP-trained models when public features are available.
arXiv Detail & Related papers (2025-06-24T17:53:28Z)
- Accelerating Differentially Private Federated Learning via Adaptive Extrapolation [2.6108066206600555]
We propose DP-FedEXP, which adaptively selects the global step size based on the diversity of the local updates. We show that DP-FedEXP provably accelerates the convergence of DP-FedAvg and it empirically outperforms existing methods tailored for DP-FL.
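For context, the extrapolation idea can be sketched with the server step-size rule from the non-private FedExP line of work; the constants below follow that rule, and DP-FedEXP's exact noise-aware form may differ.

```python
import numpy as np

def extrapolated_step_size(client_updates, eps=1e-3):
    """Server-side step size in the spirit of FedExP: when client updates
    are diverse, the mean update is short relative to the individual
    updates, the ratio below grows, and the server extrapolates further
    along the averaged direction."""
    mean_update = np.mean(client_updates, axis=0)
    avg_sq_norm = np.mean([np.linalg.norm(u) ** 2 for u in client_updates])
    eta = avg_sq_norm / (2.0 * (np.linalg.norm(mean_update) ** 2 + eps))
    return max(1.0, eta)
```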
arXiv Detail & Related papers (2025-04-14T03:43:27Z)
- Enhancing Feature-Specific Data Protection via Bayesian Coordinate Differential Privacy [55.357715095623554]
Local Differential Privacy (LDP) offers strong privacy guarantees without requiring users to trust external parties.
We propose a Bayesian framework, Bayesian Coordinate Differential Privacy (BCDP), that enables feature-specific privacy quantification.
arXiv Detail & Related papers (2024-10-24T03:39:55Z)
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
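For reference, user-level (epsilon, delta)-DP strengthens record-level DP by taking the privacy unit to be a user's entire contribution rather than a single example:

```latex
% D and D' are user-adjacent if they differ in all records of one user.
\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \Pr[M(D') \in S] + \delta
\qquad \text{for all measurable } S \text{ and all user-adjacent } D, D'.
```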
arXiv Detail & Related papers (2024-06-20T13:54:32Z)
- Noise-Aware Algorithm for Heterogeneous Differentially Private Federated Learning [21.27813247914949]
We propose Robust-HDP, which efficiently estimates the true noise level in clients' model updates. It improves utility and convergence speed, while remaining safe against clients that may maliciously send falsified privacy parameters to the server.
arXiv Detail & Related papers (2024-06-05T17:41:42Z)
- Unified Mechanism-Specific Amplification by Subsampling and Group Privacy Amplification [54.1447806347273]
Amplification by subsampling is one of the main primitives in machine learning with differential privacy.
We propose the first general framework for deriving mechanism-specific guarantees.
We analyze how subsampling affects the privacy of groups of multiple users.
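As a point of reference, the classic mechanism-agnostic bound that such mechanism-specific frameworks tighten states that running an (epsilon, delta)-DP mechanism on a Poisson subsample with sampling rate q yields

```latex
\Big( \log\!\big(1 + q\,(e^{\varepsilon} - 1)\big),\; q\,\delta \Big)\text{-DP}.
```

For small epsilon this is roughly (q epsilon, q delta)-DP, i.e., the privacy parameters shrink in proportion to the sampling rate.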
arXiv Detail & Related papers (2024-03-07T19:36:05Z)
- DP-HyPO: An Adaptive Private Hyperparameter Optimization Framework [31.628466186344582]
We introduce DP-HyPO, a pioneering framework for "adaptive" private hyperparameter optimization.
We provide a comprehensive differential privacy analysis of our framework.
We empirically demonstrate the effectiveness of DP-HyPO on a diverse set of real-world datasets.
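DP-HyPO's sampling distribution is adaptive; a minimal non-adaptive sketch of the random-stopping idea it builds on (in the style of Liu and Talwar, 2019) looks as follows, where run_trial and sample_hp are hypothetical helpers and each trial is itself assumed to be differentially private.

```python
import numpy as np

def random_stopping_hpo(run_trial, sample_hp, gamma=0.1, seed=None):
    """Repeat a DP base routine a geometrically distributed number of
    times and release only the best run; if each trial is eps-DP, the
    overall selection is roughly 3*eps-DP in the pure-DP analysis."""
    rng = np.random.default_rng(seed)
    best_hp, best_score, best_model = None, -np.inf, None
    while True:
        hp = sample_hp(rng)            # DP-HyPO adapts this distribution over time
        score, model = run_trial(hp)   # the trial itself must be differentially private
        if score > best_score:
            best_hp, best_score, best_model = hp, score, model
        if rng.random() < gamma:       # geometric stopping with parameter gamma
            return best_hp, best_model
```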
arXiv Detail & Related papers (2023-06-09T07:55:46Z)
- HPN: Personalized Federated Hyperparameter Optimization [41.587553874297676]
We address two challenges of personalized federated hyperparameter optimization (pFedHPO): handling the exponentially increased search space and characterizing each client without compromising its data privacy.
We propose learning a HyperParameter Network (HPN), fed with a client encoding, to decide personalized hyperparameters.
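A minimal sketch of what such a hyperparameter network could look like (layer sizes, the sigmoid squashing, and the range rescaling are illustrative assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

class HyperParameterNetwork(nn.Module):
    """Maps a low-dimensional learned client encoding to per-client
    hyperparameters, so raw client data never leaves the client."""
    def __init__(self, enc_dim=16, n_hparams=3, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(enc_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_hparams), nn.Sigmoid(),  # outputs in (0, 1)
        )

    def forward(self, client_encoding, low, high):
        # Rescale the (0, 1) outputs to each hyperparameter's search range.
        return low + (high - low) * self.net(client_encoding)
```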
arXiv Detail & Related papers (2023-04-11T13:02:06Z)
- A New Linear Scaling Rule for Private Adaptive Hyperparameter Optimization [57.450449884166346]
We propose an adaptive HPO method to account for the privacy cost of HPO.
We obtain state-of-the-art performance on 22 benchmark tasks spanning computer vision and natural language processing, across both pretraining and finetuning.
arXiv Detail & Related papers (2022-12-08T18:56:37Z)
- Revisiting Hyperparameter Tuning with Differential Privacy [2.089340709037618]
We provide a framework for privacy-preserving machine learning with differential privacy. We show that the additional privacy loss bound incurred by hyperparameter tuning is upper-bounded by the square root of the gained utility. We note that the additional privacy loss bound empirically scales like the square root of the logarithm of the utility term, benefiting from the doubling-step design.
arXiv Detail & Related papers (2022-11-03T14:42:19Z)
- Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
arXiv Detail & Related papers (2022-09-08T22:43:50Z)