FLoRA: Single-shot Hyper-parameter Optimization for Federated Learning
- URL: http://arxiv.org/abs/2112.08524v1
- Date: Wed, 15 Dec 2021 23:18:32 GMT
- Title: FLoRA: Single-shot Hyper-parameter Optimization for Federated Learning
- Authors: Yi Zhou, Parikshit Ram, Theodoros Salonidis, Nathalie Baracaldo, Horst
Samulowitz, Heiko Ludwig
- Abstract summary: We introduce Federated Loss suRface Aggregation (FLoRA), the first FL-HPO solution framework.
The framework enables single-shot FL-HPO solutions with minimal additional communication overhead.
Our empirical evaluation of FLoRA for Gradient Boosted Decision Trees on seven OpenML data sets demonstrates significant model accuracy improvements.
- Score: 19.854596038293277
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We address the relatively unexplored problem of hyper-parameter optimization
(HPO) for federated learning (FL-HPO). We introduce Federated Loss suRface
Aggregation (FLoRA), the first FL-HPO solution framework that can address use
cases of tabular data and gradient boosting training algorithms in addition to
stochastic gradient descent/neural networks commonly addressed in the FL
literature. The framework enables single-shot FL-HPO by first identifying a
good set of hyper-parameters that are then used in a **single** FL training.
It thus enables FL-HPO with minimal additional communication overhead
compared to FL training without HPO. Our empirical evaluation of FLoRA for
Gradient Boosted Decision Trees on seven OpenML data sets demonstrates
significant model accuracy improvements over the considered baseline, and
robustness to an increasing number of parties involved in FL-HPO training.
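The single-shot recipe lends itself to a compact sketch. Below is a minimal illustration in Python, not the authors' implementation: the function names, the plain averaging used to merge per-party losses, and the `train_and_eval`/`fl_train` callables are all assumptions. Each party scores every candidate configuration on its local data, the server merges the resulting (configuration, loss) pairs into an approximate global loss surface, and the minimizer is used for one FL training run.

```python
import numpy as np

def local_loss_surface(party_data, candidate_hps, train_and_eval):
    """One party scores every candidate configuration (hashable, e.g. a
    tuple of hyper-parameter values) on its local data."""
    return {hp: train_and_eval(party_data, hp) for hp in candidate_hps}

def aggregate_loss_surfaces(per_party_surfaces):
    """Merge per-party losses; plain averaging is one simple choice of
    loss-surface aggregation (illustrative only)."""
    candidates = per_party_surfaces[0].keys()
    return {hp: float(np.mean([s[hp] for s in per_party_surfaces]))
            for hp in candidates}

def single_shot_fl_hpo(parties, candidate_hps, train_and_eval, fl_train):
    """Pick hyper-parameters once, then run a single FL training."""
    surfaces = [local_loss_surface(p, candidate_hps, train_and_eval)
                for p in parties]
    global_surface = aggregate_loss_surfaces(surfaces)
    best_hp = min(global_surface, key=global_surface.get)
    return fl_train(parties, best_hp)   # the only FL training run
```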
Related papers
- Semi-Federated Learning: Convergence Analysis and Optimization of A Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve this intractable problem, in which we provide the closed-form solutions to the beamformers.
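As a rough illustration of the hybrid idea (an assumption-laden sketch; the paper's two-stage algorithm and closed-form beamformers are not reproduced here), one round could blend the averaged device gradient (the FL part) with a gradient computed on data available at the base station (the CL part):

```python
import numpy as np

def semifl_round(w, device_grads, bs_grad, lr=0.1, mix=0.5):
    """One hybrid round: mix the averaged device update (FL part) with a
    gradient computed at the base station (CL part). The blending weight
    `mix` is purely illustrative."""
    fl_grad = np.mean(device_grads, axis=0)
    return w - lr * (mix * fl_grad + (1.0 - mix) * bs_grad)
```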
arXiv Detail & Related papers (2023-10-04T03:32:39Z)
- Importance of Smoothness Induced by Optimizers in FL4ASR: Towards Understanding Federated Learning for End-to-End ASR [12.108696564200052]
We start by training End-to-End Automatic Speech Recognition (ASR) models using Federated Learning (FL)
We examine the fundamental considerations that can be pivotal in minimizing the performance gap in terms of word error rate between models trained using FL versus their centralized counterpart.
arXiv Detail & Related papers (2023-09-22T17:23:01Z)
- Learner Referral for Cost-Effective Federated Learning Over Hierarchical IoT Networks [21.76836812021954]
This paper proposes learner-referral-aided federated client selection (LRef-FedCS), communications resource scheduling, and local model accuracy optimization (LMAO) methods.
Our proposed LRef-FedCS approach could achieve a good balance between high global accuracy and reducing cost.
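One toy way to picture cost-aware client selection (the greedy utility-per-cost rule and all names below are illustrative assumptions, not the LRef-FedCS method):

```python
def select_clients(clients, budget):
    """Greedy cost-aware selection: rank clients by expected accuracy
    contribution per unit cost, then pick within a cost budget.
    `clients` is a list of dicts with 'id', 'utility', and 'cost' keys."""
    ranked = sorted(clients, key=lambda c: c["utility"] / c["cost"],
                    reverse=True)
    chosen, spent = [], 0.0
    for c in ranked:
        if spent + c["cost"] <= budget:
            chosen.append(c["id"])
            spent += c["cost"]
    return chosen
```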
arXiv Detail & Related papers (2023-07-19T13:33:43Z)
- Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recently, FL has been interpreted within a Model-Agnostic Meta-Learning (MAML) framework, which brings FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
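The MAML view can be made concrete with a small sketch. The first-order inner/outer update below is one common instantiation (in the spirit of Per-FedAvg) and is an assumption, not necessarily the paper's exact formulation:

```python
import numpy as np

def maml_fl_round(w, client_batches, grad_fn, alpha=0.01, beta=0.1):
    """One MAML-style FL round: each client adapts the global model with a
    local gradient step (inner loop); the server then moves the global
    model toward parameters that adapt well (first-order outer loop).
    `client_batches` yields (support, query) data per client."""
    outer_grads = []
    for support, query in client_batches:
        w_adapted = w - alpha * grad_fn(w, support)    # inner adaptation
        outer_grads.append(grad_fn(w_adapted, query))  # post-adaptation grad
    return w - beta * np.mean(outer_grads, axis=0)
```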
arXiv Detail & Related papers (2023-03-23T02:42:10Z)
- FLAGS Framework for Comparative Analysis of Federated Learning Algorithms [0.0]
This work consolidates the Federated Learning landscape and offers an objective analysis of the major FL algorithms.
To enable a uniform assessment, a multi-FL framework named FLAGS (Federated Learning AlGorithms Simulation) has been developed.
Our experiments indicate that fully decentralized FL algorithms achieve comparable accuracy under multiple operating conditions.
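To give a flavor of what such a comparison involves (an illustrative sketch; FLAGS itself is a far more complete simulator), centralized and fully decentralized aggregation differ mainly in who averages with whom:

```python
import numpy as np

def fedavg_step(client_models):
    """Centralized aggregation: a server averages all client models."""
    return np.mean(client_models, axis=0)

def gossip_step(client_models, neighbors):
    """Fully decentralized aggregation: each client averages only with its
    neighbors; `neighbors[i]` is the list of indices client i can reach."""
    return [np.mean([client_models[j] for j in [i] + neighbors[i]], axis=0)
            for i in range(len(client_models))]
```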
arXiv Detail & Related papers (2022-12-14T12:08:30Z)
- Performance Optimization for Variable Bitwidth Federated Learning in Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
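The quantize-then-aggregate loop can be sketched with a toy uniform quantizer (illustrative only; the paper's actual quantization scheme and the RL controller that selects bitwidths are not reproduced here):

```python
import numpy as np

def quantize(w, bits):
    """Uniform quantization of a parameter vector to `bits` bits (toy)."""
    lo, hi = w.min(), w.max()
    levels = 2 ** bits - 1
    q = np.round((w - lo) / (hi - lo + 1e-12) * levels)
    return q / levels * (hi - lo) + lo

def bitwidth_fl_round(client_models, bits):
    """Clients upload quantized models; the server aggregates them and
    re-quantizes the global model before synchronizing the devices."""
    uploads = [quantize(w, bits) for w in client_models]
    return quantize(np.mean(uploads, axis=0), bits)
```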
arXiv Detail & Related papers (2022-09-21T08:52:51Z)
- FedHPO-B: A Benchmark Suite for Federated Hyperparameter Optimization [50.12374973760274]
We propose and implement a benchmark suite FedHPO-B that incorporates comprehensive FL tasks, enables efficient function evaluations, and eases continuing extensions.
We also conduct extensive experiments based on FedHPO-B to benchmark a few HPO methods.
arXiv Detail & Related papers (2022-06-08T15:29:10Z)
- Single-shot Hyper-parameter Optimization for Federated Learning: A General Algorithm & Analysis [20.98323380319439]
We introduce Federated Loss SuRface Aggregation (FLoRA), a general FL-HPO solution framework.
FLoRA enables single-shot FL-HPO: identifying a single set of good hyper-parameters that are subsequently used in a single FL training.
Our empirical evaluation of FLoRA for multiple ML algorithms on seven OpenML datasets demonstrates significant model accuracy improvements over the considered baseline.
arXiv Detail & Related papers (2022-02-16T21:14:34Z)
- Hybrid Federated Learning: Algorithms and Implementation [61.0640216394349]
Federated learning (FL) is a recently proposed distributed machine learning paradigm dealing with distributed and private data sets.
We propose a new model-matching-based problem formulation for hybrid FL.
We then propose an efficient algorithm that can collaboratively train the global and local models to deal with full and partial featured data.
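A rough sketch of one such joint global/local formulation (the proximal model-matching penalty and all names are assumptions; the paper's model-matching formulation is more involved):

```python
import numpy as np

def hybrid_fl_objective(global_w, local_ws, losses, feature_idx, rho=1.0):
    """Toy model-matching objective: each client k fits a local model on
    the features it owns, with a proximal term pulling it toward the
    matching coordinates of the global model."""
    total = 0.0
    for k, (w_k, loss_k) in enumerate(zip(local_ws, losses)):
        matched = global_w[feature_idx[k]]  # global coords client k sees
        total += loss_k(w_k) + 0.5 * rho * np.sum((w_k - matched) ** 2)
    return total
```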
arXiv Detail & Related papers (2020-12-22T23:56:03Z)
- Delay Minimization for Federated Learning Over Wireless Communication Networks [172.42768672943365]
The problem of delay computation for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution.
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
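The bisection idea is generic enough to sketch: assuming a feasibility check that is monotone in the delay target (infeasible below some threshold, feasible above it), the minimum feasible value can be located to any tolerance:

```python
def bisection_min(feasible, lo, hi, eps=1e-6):
    """Find the smallest t in [lo, hi] with feasible(t) True, assuming
    feasibility is monotone in t."""
    assert feasible(hi), "upper bound must be feasible"
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            hi = mid
        else:
            lo = mid
    return hi

# Example: smallest t with t**2 >= 2 is sqrt(2).
# print(bisection_min(lambda t: t * t >= 2.0, 0.0, 2.0))
```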
arXiv Detail & Related papers (2020-07-05T19:00:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.