ModularFed: Leveraging Modularity in Federated Learning Frameworks
- URL: http://arxiv.org/abs/2212.10427v1
- Date: Mon, 31 Oct 2022 10:21:19 GMT
- Title: ModularFed: Leveraging Modularity in Federated Learning Frameworks
- Authors: Mohamad Arafeh, Hadi Otrok, Hakima Ould-Slimane, Azzam Mourad,
Chamseddine Talhi, Ernesto Damiani
- Abstract summary: We propose a research-focused framework that addresses the complexity of Federated Learning (FL) implementations.
Within this architecture, protocols are blueprints that strictly define the framework's components' design.
Our protocols aim to enable modularity in FL, supporting third-party plug-and-play architecture and dynamic simulators.
- Score: 8.139264167572213
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Numerous recent studies have proposed integrating Federated Learning (FL)
to address the privacy concerns of applying machine learning in privacy-sensitive
firms. However, the standards of the available frameworks can no longer sustain
the field's rapid advancement and hinder the integration of FL solutions that
could be prominent in advancing the field. In this paper, we propose ModularFed, a
research-focused framework that addresses the complexity of FL implementations
and the lack of adaptability and extendability in the available frameworks. We
provide a comprehensive architecture that assists FL approaches through
well-defined protocols covering three dominant FL paradigms: adaptable
workflows, dataset distribution, and third-party application support. Within
this architecture, protocols are blueprints that strictly define the design of
the framework's components, contribute to its flexibility, and strengthen
its infrastructure. Further, our protocols aim to enable modularity in FL,
supporting a third-party plug-and-play architecture and dynamic simulators
coupled with the field's major built-in data distributors. Additionally, the
framework supports wrapping multiple approaches in a single environment to
enable consistent replication of FL issues such as client deficiency, data
distribution, and network latency, which allows a fair comparison of the
techniques underlying FL technologies. In our evaluation, we examine the
applicability of our framework in addressing three major FL domains, including
statistical distribution and modular-based approaches for resource monitoring
and client selection.
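The protocol-as-blueprint idea above can be sketched as interchangeable components behind fixed interfaces. The sketch below is illustrative only: the class and method names (`ClientSelector`, `Aggregator`, `run_round`, etc.) are hypothetical and do not reflect ModularFed's actual API, but they show how a plug-and-play selector and aggregator could be swapped without touching the workflow.

```python
import random
from abc import ABC, abstractmethod

# Hypothetical component protocols in the spirit of a blueprint-style
# modular FL design; names are illustrative, not ModularFed's API.
class ClientSelector(ABC):
    @abstractmethod
    def select(self, clients, k):
        """Pick k participants for the next round."""

class RandomSelector(ClientSelector):
    def select(self, clients, k):
        return random.sample(clients, k)

class Aggregator(ABC):
    @abstractmethod
    def aggregate(self, updates):
        """Combine client updates into a global model."""

class FedAvg(Aggregator):
    def aggregate(self, updates):
        # updates: list of (weight_vector, num_samples); sample-weighted mean
        total = sum(n for _, n in updates)
        dim = len(updates[0][0])
        return [sum(w[i] * n for w, n in updates) / total for i in range(dim)]

def run_round(clients, selector, aggregator, k):
    # The workflow only sees the protocol interfaces, so any third-party
    # selector or aggregator implementing them can be plugged in.
    chosen = selector.select(clients, k)
    updates = [(c["weights"], c["n"]) for c in chosen]
    return aggregator.aggregate(updates)
```

Because the round logic depends only on the abstract interfaces, replicating an FL issue (e.g., client deficiency) reduces to swapping in a selector that simulates dropouts, leaving the rest of the environment fixed.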
Related papers
- Advances in APPFL: A Comprehensive and Extensible Federated Learning Framework [1.4206132527980742]
Federated learning (FL) is a distributed machine learning paradigm enabling collaborative model training while preserving data privacy.
We present the recent advances in developing APPFL, a framework and benchmarking suite for federated learning.
We demonstrate the capabilities of APPFL through extensive experiments evaluating various aspects of FL, including communication efficiency, privacy preservation, computational performance, and resource utilization.
arXiv Detail & Related papers (2024-09-17T22:20:26Z)
- FedModule: A Modular Federated Learning Framework [5.872098693249397]
Federated learning (FL) has been widely adopted across various applications, such as healthcare, finance, and smart cities.
This paper introduces FedModule, a flexible and extensible FL experimental framework.
FedModule adheres to the "one code, all scenarios" principle and employs a modular design that breaks the FL process into individual components.
arXiv Detail & Related papers (2024-09-07T15:03:12Z)
- FederatedScope-LLM: A Comprehensive Package for Fine-tuning Large Language Models in Federated Learning [70.38817963253034]
This paper first discusses these challenges of federated fine-tuning LLMs, and introduces our package FS-LLM as a main contribution.
We provide comprehensive federated parameter-efficient fine-tuning algorithm implementations and versatile programming interfaces for future extension in FL scenarios.
We conduct extensive experiments to validate the effectiveness of FS-LLM and benchmark advanced LLMs with state-of-the-art parameter-efficient fine-tuning algorithms in FL settings.
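Parameter-efficient fine-tuning, as benchmarked by FS-LLM, typically trains a small low-rank delta instead of the full weight matrix. The sketch below illustrates the general low-rank-adaptation (LoRA-style) idea only; the function names and shapes are hypothetical and are not FS-LLM's API.

```python
# Minimal sketch of a LoRA-style low-rank update: y = x @ (W + scale * A @ B).
# Only the small factors A and B would be trained and communicated in FL,
# which is the source of the communication/parameter savings.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def lora_forward(x, W, A, B, scale):
    # delta = A @ B has rank <= r, where r is A's column count
    delta = matmul(A, B)
    W_eff = [[w + scale * d for w, d in zip(w_row, d_row)]
             for w_row, d_row in zip(W, delta)]
    return matmul(x, W_eff)
```

For a d×d weight with rank-r factors, clients exchange 2·d·r values per layer instead of d², which is why such methods are attractive in bandwidth-limited FL settings.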
arXiv Detail & Related papers (2023-09-01T09:40:36Z)
- Hierarchical and Decentralised Federated Learning [3.055801139718484]
Hierarchical Federated Learning extends the traditional FL process to enable more efficient model aggregation.
It can improve performance and reduce costs, whilst also enabling FL to be deployed in environments not well-suited to traditional FL.
H-FL will be crucial to future FL solutions as it can aggregate and distribute models at multiple levels to optimally serve the trade-off between locality dependence and global anomaly robustness.
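Multi-level aggregation can be sketched as weighted averaging applied twice: each edge aggregates its own clients, then a central server aggregates the edge models, weighting each by the number of samples it represents. This is a generic illustration of hierarchical FL, not the specific H-FL algorithm; the function names are hypothetical.

```python
def weighted_avg(updates):
    # updates: list of (params, num_samples); returns the sample-weighted
    # mean and the total sample count it represents.
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    mean = [sum(p[i] * n for p, n in updates) / total for i in range(dim)]
    return mean, total

def hierarchical_round(edges):
    # edges: list of edge groups, each a list of (client_params, num_samples).
    # Level 1: each edge aggregates its clients locally.
    edge_models = [weighted_avg(clients) for clients in edges]
    # Level 2: the cloud aggregates edge models, weighted by sample counts.
    global_model, _ = weighted_avg(edge_models)
    return global_model
```

Because each edge model carries its total sample count upward, the two-level result equals flat FedAvg over all clients, while each level can also serve its own locality-dependent model, which is the trade-off the entry above describes.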
arXiv Detail & Related papers (2023-04-28T17:06:50Z)
- Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recently, FL has been interpreted within a Model-Agnostic Meta-Learning (MAML) framework, which brings FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
arXiv Detail & Related papers (2023-03-23T02:42:10Z)
- UniFed: All-In-One Federated Learning Platform to Unify Open-Source Frameworks [53.20176108643942]
We present UniFed, the first unified platform for standardizing open-source Federated Learning (FL) frameworks.
UniFed streamlines the end-to-end workflow for distributed experimentation and deployment, encompassing 11 popular open-source FL frameworks.
We evaluate and compare 11 popular FL frameworks from the perspectives of functionality, privacy protection, and performance.
arXiv Detail & Related papers (2022-07-21T05:03:04Z)
- FederatedScope: A Comprehensive and Flexible Federated Learning Platform via Message Passing [63.87056362712879]
We propose a novel and comprehensive federated learning platform, named FederatedScope, which is based on a message-oriented framework.
Compared to the procedural framework, the proposed message-oriented framework is more flexible to express heterogeneous message exchange.
We conduct a series of experiments on the provided easy-to-use and comprehensive FL benchmarks to validate the correctness and efficiency of FederatedScope.
arXiv Detail & Related papers (2022-04-11T11:24:21Z)
- Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization [107.72786199113183]
Federated learning (FL) provides a distributed learning framework for multiple participants to collaborate learning without sharing raw data.
In this paper, we propose a novel Split-Mix FL strategy for heterogeneous participants that, once training is done, provides in-situ customization of model sizes and robustness.
arXiv Detail & Related papers (2022-03-18T04:58:34Z)
- Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism to build a distributed control and aggregation methodology in regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.