Single-shot Hyper-parameter Optimization for Federated Learning: A
General Algorithm & Analysis
- URL: http://arxiv.org/abs/2202.08338v1
- Date: Wed, 16 Feb 2022 21:14:34 GMT
- Title: Single-shot Hyper-parameter Optimization for Federated Learning: A
General Algorithm & Analysis
- Authors: Yi Zhou, Parikshit Ram, Theodoros Salonidis, Nathalie Baracaldo, Horst
Samulowitz, Heiko Ludwig
- Abstract summary: We introduce Federated Loss SuRface Aggregation (FLoRA), a general FL-HPO solution framework.
FLoRA enables single-shot FL-HPO: identifying a single set of good hyper-parameters that are subsequently used in a single FL training.
Our empirical evaluation of FLoRA for multiple ML algorithms on seven OpenML datasets demonstrates significant model accuracy improvements over the considered baseline.
- Score: 20.98323380319439
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We address the relatively unexplored problem of hyper-parameter optimization
(HPO) for federated learning (FL-HPO). We introduce Federated Loss SuRface
Aggregation (FLoRA), a general FL-HPO solution framework that can address use
cases of tabular data and any Machine Learning (ML) model, including gradient
boosting training algorithms, and therefore further expands the scope of FL-HPO.
FLoRA enables single-shot FL-HPO: identifying a single set of good
hyper-parameters that are subsequently used in a single FL training. Thus, it
enables FL-HPO solutions with minimal additional communication overhead
compared to FL training without HPO. We theoretically characterize the
optimality gap of FL-HPO, which explicitly accounts for the heterogeneous
non-IID nature of the parties' local data distributions, a dominant
characteristic of FL systems. Our empirical evaluation of FLoRA for multiple ML
algorithms on seven OpenML datasets demonstrates significant model accuracy
improvements over the considered baseline, and robustness to an increasing
number of parties involved in FL-HPO training.
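The abstract describes the single-shot recipe only in prose, so a minimal sketch may help. The Python below assumes each party evaluates a shared list of hyper-parameter candidates on its local data and that the aggregator averages the per-party loss surfaces before selecting the minimizer; the `train_and_eval` callback and the uniform averaging rule are illustrative assumptions, not the paper's exact aggregation scheme.

```python
import numpy as np

def local_loss_surface(party_data, candidates, train_and_eval):
    """Evaluate every hyper-parameter candidate on one party's local data.

    `train_and_eval(config, data)` is a hypothetical callback that trains
    a local model with `config` and returns its validation loss.
    """
    return np.array([train_and_eval(cfg, party_data) for cfg in candidates])

def flora_single_shot(parties, candidates, train_and_eval, aggregate=np.mean):
    """Single-shot FL-HPO sketch: one round of loss collection, one argmin.

    Each party sends only a vector of per-candidate losses, so the added
    communication on top of plain FL training is minimal. Uniform averaging
    is an illustrative choice; the paper studies more general loss-surface
    aggregations.
    """
    surfaces = np.stack([local_loss_surface(p, candidates, train_and_eval)
                         for p in parties])        # (n_parties, n_candidates)
    aggregated = aggregate(surfaces, axis=0)       # one aggregated loss per candidate
    return candidates[int(np.argmin(aggregated))]  # single set of good hyper-parameters
```

The selected configuration is then used for one ordinary FL training run, which is what makes the procedure single-shot.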
Related papers
- Can We Theoretically Quantify the Impacts of Local Updates on the Generalization Performance of Federated Learning? [50.03434441234569]
Federated Learning (FL) has gained significant popularity due to its effectiveness in training machine learning models across diverse sites without requiring direct data sharing.
While various algorithms have shown that FL with local updates is a communication-efficient distributed learning framework, the generalization performance of FL with local updates has received comparatively less attention.
arXiv Detail & Related papers (2024-09-05T19:00:18Z)
- Learner Referral for Cost-Effective Federated Learning Over Hierarchical IoT Networks [21.76836812021954]
This paper proposes learner referral aided federated client selection (LRef-FedCS), communications resource scheduling, and local model accuracy optimization (LMAO) methods.
Our proposed LRef-FedCS approach achieves a good balance between high global accuracy and low cost.
arXiv Detail & Related papers (2023-07-19T13:33:43Z)
- Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recent work has interpreted FL within a Model-Agnostic Meta-Learning (MAML) framework, which brings FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
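A short sketch can make the MAML view of FL concrete. The code assumes a FedAvg-style server, a linear model with squared loss, and a first-order approximation of the meta-gradient; the step sizes and function names are illustrative, not the paper's formulation.

```python
import numpy as np

def grad_mse(w, X, y):
    """Gradient of the mean-squared error of a linear model w on (X, y)."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def maml_fl_round(w_global, clients, alpha=0.01, beta=0.1):
    """One MAML-style FL round (sketch): each client adapts the global model
    with one local gradient step, then reports the gradient of its loss
    *after* adaptation; the server averages these first-order meta-gradients.

    alpha is the inner (adaptation) step size and beta the outer step size;
    both values are placeholders.
    """
    meta_grads = []
    for X, y in clients:
        w_adapted = w_global - alpha * grad_mse(w_global, X, y)  # inner step
        meta_grads.append(grad_mse(w_adapted, X, y))             # meta-gradient
    return w_global - beta * np.mean(meta_grads, axis=0)         # outer update
```

The appeal for heterogeneous data is that the server optimizes for a model that adapts well after one local step on each client, rather than one that merely fits the average client.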
arXiv Detail & Related papers (2023-03-23T02:42:10Z)
- FLAGS Framework for Comparative Analysis of Federated Learning Algorithms [0.0]
This work consolidates the Federated Learning landscape and offers an objective analysis of the major FL algorithms.
To enable a uniform assessment, a multi-FL framework named FLAGS (Federated Learning AlGorithms Simulation) has been developed.
Our experiments indicate that fully decentralized FL algorithms achieve comparable accuracy under multiple operating conditions.
arXiv Detail & Related papers (2022-12-14T12:08:30Z)
- Performance Optimization for Variable Bitwidth Federated Learning in Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
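The quantize-then-aggregate round described above can be sketched as follows, assuming plain uniform quantization on a fixed parameter range; the paper's quantizer and the RL controller that chooses bitwidths over iterations are not reproduced here, and `local_update` is a hypothetical training callback.

```python
import numpy as np

def quantize(params, bits, lo=-1.0, hi=1.0):
    """Uniformly quantize parameters to 2**bits levels on [lo, hi].

    Deterministic rounding for illustration only; the paper's scheme and
    its RL-selected bitwidths may differ.
    """
    levels = 2 ** bits - 1
    clipped = np.clip(params, lo, hi)
    return np.round((clipped - lo) / (hi - lo) * levels) / levels * (hi - lo) + lo

def bitwidth_fl_round(w_global, clients, local_update, bits):
    """One round: devices train locally, transmit quantized parameters, and
    the server aggregates them into a quantized global model."""
    uploads = [quantize(local_update(w_global, data), bits) for data in clients]
    return quantize(np.mean(uploads, axis=0), bits)  # quantized global model
```

Lower bitwidths shrink the uplink payload but add quantization error, which is exactly the trade-off the RL controller is meant to navigate.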
arXiv Detail & Related papers (2022-09-21T08:52:51Z)
- FedHPO-B: A Benchmark Suite for Federated Hyperparameter Optimization [50.12374973760274]
We propose and implement a benchmark suite FedHPO-B that incorporates comprehensive FL tasks, enables efficient function evaluations, and eases continuing extensions.
We also conduct extensive experiments based on FedHPO-B to benchmark a few HPO methods.
arXiv Detail & Related papers (2022-06-08T15:29:10Z)
- FLoRA: Single-shot Hyper-parameter Optimization for Federated Learning [19.854596038293277]
We introduce Federated Loss suRface Aggregation (FLoRA), the first FL-HPO solution framework.
The framework enables single-shot FL-HPO solutions with minimal additional communication overhead.
Our empirical evaluation of FLoRA for Gradient Boosted Decision Trees on seven OpenML data sets demonstrates significant model accuracy improvements.
arXiv Detail & Related papers (2021-12-15T23:18:32Z)
- Joint Optimization of Communications and Federated Learning Over the Air [32.14738452396869]
Federated learning (FL) is an attractive paradigm for making use of rich distributed data while protecting data privacy.
In this paper, we study joint optimization of communications and FL based on analog aggregation transmission in realistic wireless networks.
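As a rough illustration of analog aggregation, the sketch below uses textbook channel-inversion power control so that the superimposed signals arrive as a plain sum; the channel model, noise, and scaling are standard over-the-air-computation assumptions rather than this paper's exact system model.

```python
import numpy as np

rng = np.random.default_rng(0)

def over_the_air_average(local_updates, channel_gains, noise_std=0.01):
    """Sketch of analog aggregation: each device pre-scales its update by the
    inverse of its channel gain, the wireless medium superimposes the signals,
    and the server reads off a noisy average."""
    tx = [u / h for u, h in zip(local_updates, channel_gains)]  # pre-scaling
    received = sum(h * x for h, x in zip(channel_gains, tx))    # superposition
    received = received + rng.normal(0.0, noise_std, received.shape)
    return received / len(local_updates)                        # noisy average

# Example: three devices with 4-dimensional model updates
updates = [rng.normal(size=4) for _ in range(3)]
print(over_the_air_average(updates, channel_gains=[0.8, 1.2, 0.5]))
```

The joint design problem arises because inverting weak channels costs transmit power, so power limits, scheduling, and learning performance have to be optimized together.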
arXiv Detail & Related papers (2021-04-08T03:38:31Z)
- Hybrid Federated Learning: Algorithms and Implementation [61.0640216394349]
Federated learning (FL) is a recently proposed distributed machine learning paradigm dealing with distributed and private data sets.
We propose a new model-matching-based problem formulation for hybrid FL.
We then propose an efficient algorithm that can collaboratively train the global and local models to deal with fully and partially featured data.
arXiv Detail & Related papers (2020-12-22T23:56:03Z)
- Delay Minimization for Federated Learning Over Wireless Communication Networks [172.42768672943365]
The problem of delay minimization for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution.
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
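The bisection search mentioned above can be sketched generically. The sketch assumes the optimal delay is the root of a monotone feasibility function g(t); how the paper encodes its resource constraints into such a function is not reproduced here.

```python
def bisection_search(g, lo, hi, tol=1e-6):
    """Find t with g(t) ~= 0 by bisection, assuming g is monotone increasing
    on [lo, hi] with g(lo) <= 0 <= g(hi).

    In a delay-minimization setting, g(t) would encode whether a target
    delay t is achievable under the resource constraints (a hypothetical
    encoding; see the paper for the actual problem)."""
    assert g(lo) <= 0.0 <= g(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) <= 0.0:
            lo = mid   # the root lies above mid
        else:
            hi = mid   # the root lies below mid
    return 0.5 * (lo + hi)

# Example: the smallest t with 1/t <= 2 is t = 0.5
print(bisection_search(lambda t: 2.0 - 1.0 / t, 0.1, 10.0))  # ~0.5
```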
arXiv Detail & Related papers (2020-07-05T19:00:07Z)