HPN: Personalized Federated Hyperparameter Optimization
- URL: http://arxiv.org/abs/2304.05195v1
- Date: Tue, 11 Apr 2023 13:02:06 GMT
- Title: HPN: Personalized Federated Hyperparameter Optimization
- Authors: Anda Cheng, Zhen Wang, Yaliang Li, Jian Cheng
- Abstract summary: We address two challenges of personalized federated hyperparameter optimization (pFedHPO):
handling the exponentially increased search space and characterizing each client without compromising its data privacy.
We propose learning a HyperParameter Network (HPN) fed with a client encoding to decide personalized hyperparameters.
- Score: 41.587553874297676
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Numerous research studies in the field of federated learning (FL) have
attempted to use personalization to address the heterogeneity among clients,
one of FL's most crucial and challenging problems. However, existing works
predominantly focus on tailoring models. Yet, due to the heterogeneity of
clients, they may each require different choices of hyperparameters, which have
not been studied so far. We pinpoint two challenges of personalized federated
hyperparameter optimization (pFedHPO): handling the exponentially increased
search space and characterizing each client without compromising its data
privacy. To overcome them, we propose learning a
HyperParameter Network (HPN) fed with client
encoding to decide personalized hyperparameters. The client encoding is
calculated with a random projection-based procedure to protect each client's
privacy. Besides, we design a novel mechanism to debias the low-fidelity
function evaluation samples for learning HPN. We conduct extensive experiments
on FL tasks from various domains, demonstrating the superiority of HPN.
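To make the mechanism concrete, below is a minimal sketch (not the authors' released code) of the two pieces the abstract describes: a random-projection client encoding that keeps raw client statistics private, and a hypernetwork that maps the encoding to personalized hyperparameters. All dimensions, names, and the rescaling scheme are illustrative assumptions.
```python
# Minimal sketch (not the authors' code): a hypernetwork mapping a
# random-projection client encoding to personalized hyperparameters.
import torch
import torch.nn as nn

class HyperParameterNetwork(nn.Module):
    """Maps a client encoding to hyperparameters in (0, 1); each output
    is then rescaled into that hyperparameter's own range."""
    def __init__(self, enc_dim: int, n_hparams: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(enc_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_hparams), nn.Sigmoid(),
        )

    def forward(self, client_encoding: torch.Tensor) -> torch.Tensor:
        return self.net(client_encoding)

def client_encoding(local_stats: torch.Tensor, proj: torch.Tensor) -> torch.Tensor:
    # Random projection: only the projected vector leaves the client,
    # so raw local statistics are never shared.
    return local_stats @ proj

# Usage: a client summarizes its data, projects it, and queries the HPN.
torch.manual_seed(0)
stats_dim, enc_dim, n_hparams = 128, 16, 3          # illustrative sizes
proj = torch.randn(stats_dim, enc_dim) / enc_dim ** 0.5  # shared random matrix
hpn = HyperParameterNetwork(enc_dim, n_hparams)

local_stats = torch.randn(stats_dim)                # stand-in for real client statistics
enc = client_encoding(local_stats, proj)
lr_raw, momentum, wd_raw = hpn(enc)                 # per-client values in (0, 1)
lr = 10 ** (-4 + 3 * lr_raw.item())                 # e.g. rescale into [1e-4, 1e-1]
```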
Related papers
- FedCAda: Adaptive Client-Side Optimization for Accelerated and Stable Federated Learning [57.38427653043984]
Federated learning (FL) has emerged as a prominent approach for collaborative training of machine learning models across distributed clients.
We introduce FedCAda, an innovative federated client-side adaptive algorithm designed to tackle this challenge.
We demonstrate that FedCAda outperforms the state-of-the-art methods in terms of adaptability, convergence, stability, and overall performance.
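As a rough illustration of the setting FedCAda addresses, the sketch below runs a generic FedAvg round in which each client uses an adaptive (Adam-style) local optimizer; FedCAda's specific client-side corrections are not reproduced here, and all names are hypothetical.
```python
# Illustrative only: one FedAvg round with adaptive client optimizers.
# Assumes float-only parameters/buffers (e.g. a plain MLP).
import copy
import torch
import torch.nn as nn

def local_adaptive_update(global_model, data_loader, lr=1e-3, epochs=1):
    model = copy.deepcopy(global_model)
    opt = torch.optim.Adam(model.parameters(), lr=lr)  # adaptive client optimizer
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in data_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def fedavg_round(global_model, client_loaders):
    states = [local_adaptive_update(global_model, dl) for dl in client_loaders]
    # Server averages the clients' updated weights.
    avg = {k: torch.stack([s[k] for s in states]).mean(0) for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model
```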
arXiv Detail & Related papers (2024-05-20T06:12:33Z) - How to Privately Tune Hyperparameters in Federated Learning? Insights from a Benchmark Study [1.4968312514344115]
PrivTuna is a novel framework for privacy-preserving HP tuning using multiparty homomorphic encryption.
We use PrivTuna to implement privacy-preserving federated averaging and density-based clustering.
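PrivTuna's multiparty homomorphic encryption is beyond a short snippet, but the simplified stand-in below, pairwise additive masking, shows the effect such protocols achieve: the server recovers the average of the clients' tuning metrics without seeing any individual value. This is not PrivTuna's actual mechanism.
```python
# Simplified stand-in (not multiparty homomorphic encryption): pairwise
# additive masks cancel in the sum, so the server learns only the average.
import random

def masked_reports(values, seed=42):
    n = len(values)
    rng = random.Random(seed)
    # Pairwise masks: client i adds m, client j subtracts the same m.
    masks = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.uniform(-1e6, 1e6)
            masks[i][j], masks[j][i] = m, -m
    return [v + sum(masks[i]) for i, v in enumerate(values)]

client_metrics = [0.81, 0.74, 0.90]        # e.g. per-client validation accuracy
reports = masked_reports(client_metrics)    # each report is individually meaningless
server_avg = sum(reports) / len(reports)    # masks cancel: the true average
assert abs(server_avg - sum(client_metrics) / 3) < 1e-6
```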
arXiv Detail & Related papers (2024-02-25T13:25:51Z) - Learn What You Need in Personalized Federated Learning [53.83081622573734]
Learn2pFed is a novel algorithm-unrolling-based personalized federated learning framework.
We show that Learn2pFed significantly outperforms previous personalized federated learning methods.
arXiv Detail & Related papers (2024-01-16T12:45:15Z) - Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification [51.04894019092156]
Federated learning (FL) has been recognized as a rapidly growing research area, where the model is trained over distributed clients under the orchestration of a parameter server (PS).
In this paper, we propose a novel privacy-preserving federated primal-dual algorithm with model sparsification for non-convex and non-smooth FL problems.
Its unique properties, insights, and analyses are also presented.
arXiv Detail & Related papers (2023-10-30T14:15:47Z) - Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose FedGMM, a novel approach to Personalized Federated Learning (PFL) that utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
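A minimal sketch of the FedGMM idea, assuming a toy 2-D client dataset: fit a Gaussian mixture to one client's inputs and use the mixture log-likelihood to flag novel samples. This is an illustration of the concept, not the authors' implementation.
```python
# Fit a per-client Gaussian mixture and use log-likelihood for
# novel-sample detection (toy data; not the paper's federated procedure).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
client_data = rng.normal(loc=[0, 0], scale=1.0, size=(500, 2))  # toy client inputs

gmm = GaussianMixture(n_components=3, random_state=0).fit(client_data)

# Samples with low log-likelihood under the client's mixture are flagged.
threshold = np.quantile(gmm.score_samples(client_data), 0.01)
novel_point = np.array([[8.0, 8.0]])
is_novel = gmm.score_samples(novel_point)[0] < threshold
print(f"novel: {is_novel}")
```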
arXiv Detail & Related papers (2023-05-01T20:04:46Z) - Adaptive Federated Learning via New Entropy Approach [14.595709494370372]
Federated Learning (FL) has emerged as a prominent distributed machine learning framework.
In this paper, we propose an adaptive FEDerated learning algorithm based on ENTropy theory (FedEnt) to alleviate the parameter deviation among heterogeneous clients.
arXiv Detail & Related papers (2023-03-27T07:57:04Z) - FedPop: A Bayesian Approach for Personalised Federated Learning [25.67466138369391]
Personalised federated learning aims at collaboratively learning a machine learning model tailored for each client.
We propose a novel methodology coined FedPop by recasting personalised FL into the population modeling paradigm.
Compared to existing personalised FL methods, the proposed methodology has important benefits: it is robust to client drift, practical for inference on new clients, and above all, enables uncertainty quantification under mild computational and memory overheads.
arXiv Detail & Related papers (2022-06-07T22:52:59Z) - Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
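The representation-learning idea can be sketched as one encoder shared by all clients plus a small per-client head; the sizes and names below are illustrative assumptions, not the paper's exact architecture.
```python
# Sketch of the shared-representation idea: a common encoder trained from
# all clients' data, with a lightweight personalized head per client.
import torch
import torch.nn as nn

shared_encoder = nn.Sequential(     # learned jointly across clients
    nn.Linear(32, 64), nn.ReLU(),
)
client_heads = {cid: nn.Linear(64, 10) for cid in range(5)}  # per-client parameters

def personalized_forward(cid: int, x: torch.Tensor) -> torch.Tensor:
    return client_heads[cid](shared_encoder(x))

logits = personalized_forward(2, torch.randn(8, 32))  # a batch for client 2
```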
arXiv Detail & Related papers (2022-06-05T01:14:46Z) - QuPeD: Quantized Personalization via Distillation with Applications to Federated Learning [8.420943739336067]
Federated learning (FL) aims to collaboratively train a single global model using multiple clients and a server.
We introduce a quantized and personalized FL algorithm, QuPeD, that facilitates collective training with personalized model compression.
Numerically, we validate that QuPeD outperforms competing personalized FL methods, FedAvg, and local training of clients in various heterogeneous settings.
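A simplified sketch of QuPeD's two ingredients, distillation from a global (teacher) model and quantization of the personal weights to a small codebook; the paper's joint training procedure and learned codebooks are omitted, and the hyperparameters shown are illustrative.
```python
# Simplified distillation loss plus nearest-level weight quantization.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, alpha=0.5, T=2.0):
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * T * T
    return alpha * hard + (1 - alpha) * soft

def quantize(w: torch.Tensor, levels: torch.Tensor) -> torch.Tensor:
    # Snap each weight to the nearest entry of a per-client codebook.
    idx = (w.unsqueeze(-1) - levels).abs().argmin(dim=-1)
    return levels[idx]

student = torch.randn(8, 10)
teacher = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student, teacher, labels)

levels = torch.tensor([-0.5, 0.0, 0.5])   # e.g. a 3-level codebook
print(quantize(torch.randn(4, 4), levels))
```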
arXiv Detail & Related papers (2021-07-29T10:55:45Z) - QuPeL: Quantized Personalization with Applications to Federated Learning [8.420943739336067]
In this work, we introduce a quantized and personalized FL algorithm, QuPeL, that facilitates collective training with heterogeneous clients.
For personalization, we allow clients to learn compressed personalized models with different quantization parameters depending on their resources.
Numerically, we show that optimizing over the quantization levels increases the performance and we validate that QuPeL outperforms both FedAvg and local training of clients in a heterogeneous setting.
arXiv Detail & Related papers (2021-02-23T16:43:51Z)