How to Privately Tune Hyperparameters in Federated Learning? Insights from a Benchmark Study
- URL: http://arxiv.org/abs/2402.16087v2
- Date: Wed, 22 May 2024 13:56:57 GMT
- Title: How to Privately Tune Hyperparameters in Federated Learning? Insights from a Benchmark Study
- Authors: Natalija Mitic, Apostolos Pyrgelis, Sinem Sav
- Abstract summary: PrivTuna is a novel framework for privacy-preserving HP tuning using multiparty homomorphic encryption.
We use PrivTuna to implement privacy-preserving federated averaging and density-based clustering.
- Score: 1.4968312514344115
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we address the problem of privacy-preserving hyperparameter (HP) tuning for cross-silo federated learning (FL). We first perform a comprehensive measurement study that benchmarks various HP strategies suitable for FL. Our benchmarks show that the optimal parameters of the FL server, e.g., the learning rate, can be accurately and efficiently tuned based on the HPs found by each client on its local data. We demonstrate that HP averaging is suitable for iid settings, while density-based clustering can uncover the optimal set of parameters in non-iid ones. Then, to prevent information leakage from the exchange of the clients' local HPs, we design and implement PrivTuna, a novel framework for privacy-preserving HP tuning using multiparty homomorphic encryption. We use PrivTuna to implement privacy-preserving federated averaging and density-based clustering, and we experimentally evaluate its performance demonstrating its computation/communication efficiency and its precision in tuning hyperparameters.
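The two aggregation strategies the benchmark compares can be illustrated with a short sketch. The snippet below is plain Python with scikit-learn and illustrative values, not PrivTuna's encrypted (MHE) implementation: it shows HP averaging for the iid case and density-based clustering for the non-iid case, where the densest cluster of client-reported HPs is taken as the server's choice.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Each client reports the best hyperparameters found on its local data,
# e.g. (learning_rate, momentum). Values are illustrative.
client_hps = np.array([
    [0.10, 0.90],
    [0.12, 0.88],
    [0.11, 0.91],
    [0.50, 0.50],   # a client with non-iid data may land far from the rest
])

# iid setting: plain averaging of the clients' local hyperparameters.
hp_avg = client_hps.mean(axis=0)

# non-iid setting: density-based clustering; the most populated cluster's
# centroid is taken as the server-side hyperparameter choice, while
# outliers (label -1) are ignored.
labels = DBSCAN(eps=0.05, min_samples=2).fit_predict(client_hps)
core = labels[labels != -1]
best_cluster = np.bincount(core).argmax()
hp_cluster = client_hps[labels == best_cluster].mean(axis=0)

print("averaged HPs: ", hp_avg)
print("clustered HPs:", hp_cluster)
```

In this toy run, averaging is pulled toward the outlier client, while clustering recovers the parameters shared by the majority, mirroring the paper's iid vs. non-iid finding.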
Related papers
- Immersion and Invariance-based Coding for Privacy-Preserving Federated Learning [1.5989047000011911]
Federated learning (FL) has emerged as a method to preserve privacy in collaborative distributed learning.
We introduce a privacy-preserving FL framework that combines differential privacy and system immersion tools from control theory.
We demonstrate that the proposed privacy-preserving scheme can be tailored to offer any desired level of differential privacy for both local and global model parameters.
arXiv Detail & Related papers (2024-09-25T15:04:42Z)
- Private and Federated Stochastic Convex Optimization: Efficient Strategies for Centralized Systems [8.419845742978985]
This paper addresses the challenge of preserving privacy in Federated Learning (FL) within centralized systems.
We devise methods that ensure Differential Privacy (DP) while maintaining optimal convergence rates for homogeneous and heterogeneous data distributions.
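The paper's exact constructions aren't reproduced here, but a standard mechanism for DP in stochastic optimization (per-sample gradient clipping plus Gaussian noise, in the style of DP-SGD) gives the flavor; the sketch below uses illustrative constants and is not necessarily the paper's method.

```python
import numpy as np

def dp_sgd_step(per_sample_grads, clip_norm=1.0, noise_mult=1.1, rng=None):
    """One differentially private gradient step: clip each per-sample
    gradient to clip_norm, average, then add calibrated Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    clipped = per_sample_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    mean_grad = clipped.mean(axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(per_sample_grads),
                       size=mean_grad.shape)
    return mean_grad + noise

grads = np.random.default_rng(1).normal(size=(32, 10))  # 32 samples, 10 params
print(dp_sgd_step(grads))
```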
arXiv Detail & Related papers (2024-07-17T08:19:58Z)
- Efficient Federated Prompt Tuning for Black-box Large Pre-trained Models [62.838689691468666]
We propose Federated Black-Box Prompt Tuning (Fed-BBPT) to optimally harness each local dataset.
Fed-BBPT capitalizes on a central server that aids local users in collaboratively training a prompt generator through regular aggregation.
Relative to extensive fine-tuning, Fed-BBPT proficiently sidesteps the memory challenges tied to pre-trained model (PTM) storage and fine-tuning on local machines.
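Fed-BBPT's black-box optimization details are beyond this summary, but the server-side "regular aggregation" step it relies on is essentially federated averaging of the prompt generator's parameters. The sketch below is a hypothetical, minimal rendering of that aggregation step only.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Server-side aggregation: weighted average of the clients'
    prompt-generator parameters, proportional to local dataset sizes."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three clients, each holding a flattened parameter vector of a
# (hypothetical) prompt generator after one round of local training.
rng = np.random.default_rng(0)
weights = [rng.normal(size=128) for _ in range(3)]
sizes = [1000, 500, 1500]
global_params = fedavg(weights, sizes)
```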
arXiv Detail & Related papers (2023-10-04T19:30:49Z)
- FedPop: Federated Population-based Hyperparameter Tuning [30.45354486897489]
Federated Learning (FL) is a distributed machine learning (ML) paradigm, in which multiple clients collaboratively train ML models without centralizing their local data.
Despite extensive research on tuning HPs for centralized ML, these methods yield suboptimal results when employed in FL.
This is mainly because their "training-after-tuning" framework is unsuitable for FL clients with limited computational power.
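The population-based alternative tunes while training rather than before it. The toy loop below is a single-machine schematic of that idea (a small population of HP candidates, the worst replaced each round by a perturbed copy of the best); it is not FedPop's actual algorithm, and the objective is a stand-in for a round's validation score.

```python
import random

def evaluate(lr):
    """Stand-in for one training round's validation score (toy objective
    peaking at lr = 0.1)."""
    return -(lr - 0.1) ** 2

# Keep a small population of candidates; each round, replace the worst
# performer with a perturbed copy of the best ("tuning while training").
population = [random.uniform(0.001, 1.0) for _ in range(4)]
for rnd in range(20):
    scores = [evaluate(lr) for lr in population]
    best = population[max(range(4), key=scores.__getitem__)]
    worst = min(range(4), key=scores.__getitem__)
    population[worst] = best * random.uniform(0.8, 1.25)  # explore nearby
print("tuned learning rate:", best)
```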
arXiv Detail & Related papers (2023-08-16T19:14:52Z)
- Differentially Private Wireless Federated Learning Using Orthogonal Sequences [56.52483669820023]
We propose a privacy-preserving uplink over-the-air computation (AirComp) method, termed FLORAS.
We prove that FLORAS offers both item-level and client-level differential privacy guarantees.
A new FL convergence bound is derived which, combined with the privacy guarantees, allows for a smooth tradeoff between the achieved convergence rate and differential privacy levels.
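The orthogonal-sequence design and the DP analysis are the paper's contribution and are omitted here, but the AirComp primitive itself is simple to sketch: all clients transmit simultaneously, the channel physically sums their analog signals, and the server only ever observes the noisy aggregate, never an individual update.

```python
import numpy as np

rng = np.random.default_rng(0)
updates = [rng.normal(size=16) for _ in range(5)]  # local model updates

# Over-the-air computation: simultaneous transmission means the wireless
# channel superimposes (sums) the clients' signals; the server receives
# only the sum plus channel noise, never any single client's update.
channel_noise = rng.normal(0.0, 0.1, size=16)
received = sum(updates) + channel_noise

global_update = received / len(updates)  # noisy average for the FL server
```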
arXiv Detail & Related papers (2023-06-14T06:35:10Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that preserve privacy by distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
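A minimal sketch of the per-parameter idea, under the assumption that the distortion is additive noise with a separately chosen scale for each parameter (the paper's actual mechanism and its optimization of the trade-off are more general):

```python
import numpy as np

def distort(params, noise_scales):
    """Perturb each model parameter before upload; a larger per-parameter
    noise scale means stronger protection (and lower utility) for that
    parameter."""
    rng = np.random.default_rng(0)
    return params + rng.normal(0.0, noise_scales, size=params.shape)

params = np.random.default_rng(1).normal(size=8)
# e.g. stronger protection for the first half of the parameters
scales = np.array([0.5] * 4 + [0.05] * 4)
protected = distort(params, scales)
```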
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- HPN: Personalized Federated Hyperparameter Optimization [41.587553874297676]
We address two challenges of personalized federated hyperparameter optimization (pFedHPO): handling the exponentially increased search space and characterizing each client without compromising its data privacy.
We propose learning a HyperParameter Network (HPN), fed with a client encoding, to decide personalized hyperparameters.
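A toy forward pass illustrates the shape of the idea: a small network maps a client's encoding vector to hyperparameter values squashed into valid ranges. The architecture, sizes, and output ranges below are illustrative assumptions, not HPN's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)   # toy two-layer network
W2, b2 = rng.normal(size=(2, 16)), np.zeros(2)

def hpn_forward(client_encoding):
    """Map a client's encoding to personalized hyperparameters
    (here: learning rate and dropout), squashed into valid ranges."""
    h = np.tanh(W1 @ client_encoding + b1)
    lr_logit, drop_logit = W2 @ h + b2
    lr = 10 ** (-4 + 3 / (1 + np.exp(-lr_logit)))   # lr in [1e-4, 1e-1]
    dropout = 1 / (1 + np.exp(-drop_logit)) * 0.5   # dropout in [0, 0.5]
    return lr, dropout

print(hpn_forward(rng.normal(size=8)))  # one client's personalized HPs
```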
arXiv Detail & Related papers (2023-04-11T13:02:06Z)
- A New Linear Scaling Rule for Private Adaptive Hyperparameter Optimization [57.450449884166346]
We propose an adaptive HPO method to account for the privacy cost of HPO.
We obtain state-of-the-art performance on 22 benchmark tasks spanning computer vision and natural language processing, in both pretraining and finetuning regimes.
arXiv Detail & Related papers (2022-12-08T18:56:37Z)
- FedHPO-B: A Benchmark Suite for Federated Hyperparameter Optimization [50.12374973760274]
We propose and implement a benchmark suite FedHPO-B that incorporates comprehensive FL tasks, enables efficient function evaluations, and eases continuing extensions.
We also conduct extensive experiments based on FedHPO-B to benchmark a few HPO methods.
arXiv Detail & Related papers (2022-06-08T15:29:10Z)
- Stochastic Coded Federated Learning with Convergence and Privacy Guarantees [8.2189389638822]
Federated learning (FL) has attracted much attention as a privacy-preserving distributed machine learning framework.
This paper proposes a coded federated learning framework, namely stochastic coded federated learning (SCFL), to mitigate the straggler issue.
We characterize the privacy guarantee via mutual information differential privacy (MI-DP) and analyze the convergence performance.
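A loose schematic of the straggler-mitigation idea, and only that: each client shares a noise-masked coded proxy of its contribution ahead of time, and the server substitutes the proxy when a client's update arrives late. This is not SCFL's actual coding scheme, and the masking here is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 5, 10
updates = [rng.normal(size=dim) for _ in range(n_clients)]

# Before training, each client deposits a noise-masked coded proxy of its
# contribution with the server (hypothetical masking).
coded = [u + rng.normal(0.0, 0.2, size=dim) for u in updates]

stragglers = {3}  # clients whose round-t update did not arrive in time
received = [coded[i] if i in stragglers else updates[i]
            for i in range(n_clients)]
global_update = np.mean(received, axis=0)  # aggregation proceeds on time
```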
arXiv Detail & Related papers (2022-01-25T04:43:29Z)
- Differentially Private Federated Bayesian Optimization with Distributed Exploration [48.9049546219643]
We introduce differential privacy (DP) into federated Thompson sampling through a general framework for adding DP to iterative algorithms.
We show that DP-FTS-DE achieves high utility (competitive performance) with a strong privacy guarantee.
We also use real-world experiments to show that DP-FTS-DE induces a trade-off between privacy and utility.
arXiv Detail & Related papers (2021-10-27T04:11:06Z)