FLAGS Framework for Comparative Analysis of Federated Learning Algorithms
- URL: http://arxiv.org/abs/2212.07179v1
- Date: Wed, 14 Dec 2022 12:08:30 GMT
- Title: FLAGS Framework for Comparative Analysis of Federated Learning Algorithms
- Authors: Ahnaf Hannan Lodhi, Barış Akgün, Öznur Özkasap
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) has become a key choice for distributed machine
learning. Initially focused on centralized aggregation, recent works in FL have
emphasized greater decentralization to adapt to the highly heterogeneous
network edge. Among these, Hierarchical, Device-to-Device and Gossip Federated
Learning (HFL, D2DFL & GFL respectively) can be considered foundational FL
algorithms employing fundamental aggregation strategies. A number of FL
algorithms were subsequently proposed that jointly employ multiple fundamental
aggregation schemes. Existing research, however, subjects these FL algorithms
to varied conditions and gauges their performance mainly against Federated
Averaging (FedAvg). This work consolidates the FL
landscape and offers an objective analysis of the major FL algorithms through a
comprehensive cross-evaluation for a wide range of operating conditions. In
addition to the three foundational FL algorithms, this work also analyzes six
derived algorithms. To enable a uniform assessment, a multi-FL framework named
FLAGS: Federated Learning AlGorithms Simulation has been developed for rapid
configuration of multiple FL algorithms. Our experiments indicate that fully
decentralized FL algorithms achieve comparable accuracy under multiple
operating conditions, including asynchronous aggregation and the presence of
stragglers. Furthermore, decentralized FL can also operate in noisy
environments and with a comparably higher local update rate. However, the
impact of extremely skewed data distributions on decentralized FL is much more
adverse than on centralized variants. The results indicate that it may not be
necessary to restrict the devices to a single FL algorithm; rather, multi-FL
nodes may operate with greater efficiency.
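As a point of reference, the centralized aggregation performed by FedAvg, which the decentralized variants (D2DFL, GFL) replace with peer-to-peer or gossip exchanges, amounts to a data-weighted average of client models. The sketch below is illustrative only; the function names are hypothetical and not taken from the FLAGS codebase.

```python
def fedavg_aggregate(client_weights, client_sizes):
    """FedAvg-style aggregation: weight each client's flat parameter
    vector by its share of the total training data.

    client_weights: list of parameter vectors (lists of floats),
                    one per client.
    client_sizes:   number of local training samples per client.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n / total for w, n in zip(client_weights, client_sizes))
        for i in range(dim)
    ]


def gossip_step(mine, peer, mix=0.5):
    """One gossip-style (GFL) exchange: mix parameters with a single
    randomly chosen peer instead of a central server."""
    return [mix * a + (1.0 - mix) * b for a, b in zip(mine, peer)]
```

With two clients holding 1 and 3 samples respectively, `fedavg_aggregate([[0.0, 2.0], [2.0, 4.0]], [1, 3])` weights the second client three times as heavily, illustrating why skewed data distributions affect the two aggregation styles differently.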
Related papers
- Where is the Testbed for my Federated Learning Research? [3.910931245706272]
We present CoLExT, a real-world testbed for federated learning (FL) research.
CoLExT is designed to streamline experimentation with custom FL algorithms in a rich testbed configuration space.
Through an initial investigation involving popular FL algorithms running on CoLExT, we reveal previously unknown trade-offs, inefficiencies, and programming bugs.
arXiv Detail & Related papers (2024-07-19T09:34:04Z)
- Not All Federated Learning Algorithms Are Created Equal: A Performance Evaluation Study [1.9265466185360185]
Federated Learning (FL) emerged as a practical approach to training a model from decentralized data.
To bridge this gap, we conduct extensive performance evaluation on several canonical FL algorithms.
Our comprehensive measurement study reveals that no single algorithm works best across different performance metrics.
arXiv Detail & Related papers (2024-03-26T00:33:49Z)
- Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recent FL has been interpreted within a Model-Agnostic Meta-Learning (MAML) framework, which brings FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
arXiv Detail & Related papers (2023-03-23T02:42:10Z)
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm, FAFED, based on a momentum-based variance reduction technique for cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
- NET-FLEET: Achieving Linear Convergence Speedup for Fully Decentralized Federated Learning with Heterogeneous Data [12.701031075169887]
Federated learning (FL) has received a surge of interest in recent years thanks to its benefits in data privacy protection, efficient communication, and parallel data processing.
Most existing works on FL are limited to systems with i.i.d. data and centralized parameter servers.
We propose a new algorithm, called NET-FLEET, for fully decentralized FL systems with data heterogeneity.
arXiv Detail & Related papers (2022-08-17T19:17:23Z)
- Single-shot Hyper-parameter Optimization for Federated Learning: A General Algorithm & Analysis [20.98323380319439]
We introduce Federated Loss SuRface Aggregation (FLoRA), a general FL-HPO solution framework.
FLoRA enables single-shot FL-HPO: identifying a single set of good hyperparameters that are subsequently used in a single FL training.
Our empirical evaluation of FLoRA for multiple ML algorithms on seven OpenML datasets demonstrates significant model accuracy improvements over the considered baseline.
arXiv Detail & Related papers (2022-02-16T21:14:34Z)
- Hybrid Federated Learning: Algorithms and Implementation [61.0640216394349]
Federated learning (FL) is a recently proposed distributed machine learning paradigm dealing with distributed and private data sets.
We propose a new model-matching-based problem formulation for hybrid FL.
We then propose an efficient algorithm that can collaboratively train the global and local models to deal with full and partial featured data.
arXiv Detail & Related papers (2020-12-22T23:56:03Z)
- FedML: A Research Library and Benchmark for Federated Machine Learning [55.09054608875831]
Federated learning (FL) is a rapidly growing research field in machine learning.
Existing FL libraries cannot adequately support diverse algorithmic development.
We introduce FedML, an open research library and benchmark to facilitate FL algorithm development and fair performance comparison.
arXiv Detail & Related papers (2020-07-27T13:02:08Z)
- Delay Minimization for Federated Learning Over Wireless Communication Networks [172.42768672943365]
The problem of delay computation for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution.
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
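A bisection search of the kind mentioned above is a generic technique: it halves an interval around a monotone feasibility threshold until the optimum is bracketed within a tolerance. The feasibility predicate below is a hypothetical stand-in, not the paper's actual wireless delay constraint.

```python
def bisect_min_delay(feasible, lo, hi, tol=1e-6):
    """Find the smallest value t in [lo, hi] with feasible(t) True,
    assuming feasible is monotone: False below some threshold,
    True at and above it."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            hi = mid  # mid satisfies the constraint; try a tighter budget
        else:
            lo = mid  # mid is infeasible; more delay budget is needed
    return hi
```

For a toy constraint with threshold 2.5, `bisect_min_delay(lambda t: t >= 2.5, 0.0, 10.0)` converges to approximately 2.5.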
arXiv Detail & Related papers (2020-07-05T19:00:07Z)
- FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data [59.50904660420082]
Federated Learning (FL) has become a popular paradigm for learning from distributed data.
To effectively utilize data at different devices without moving them to the cloud, algorithms such as the Federated Averaging (FedAvg) have adopted a "computation then aggregation" (CTA) model.
arXiv Detail & Related papers (2020-05-22T23:07:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.