Modular Federated Learning: A Meta-Framework Perspective
- URL: http://arxiv.org/abs/2505.08646v1
- Date: Tue, 13 May 2025 15:04:55 GMT
- Title: Modular Federated Learning: A Meta-Framework Perspective
- Authors: Frederico Vicente, Cláudia Soares, Dušan Jakovetić
- Abstract summary: Federated Learning (FL) enables distributed machine learning training while preserving privacy. Despite its rapid advancements, FL remains a complex and multifaceted field. We introduce a meta-framework perspective, conceptualising FL as a composition of modular components.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) enables distributed machine learning training while preserving privacy, representing a paradigm shift for data-sensitive and decentralized environments. Despite its rapid advancements, FL remains a complex and multifaceted field, requiring a structured understanding of its methodologies, challenges, and applications. In this survey, we introduce a meta-framework perspective, conceptualising FL as a composition of modular components that systematically address core aspects such as communication, optimisation, security, and privacy. We provide a historical contextualisation of FL, tracing its evolution from distributed optimisation to modern distributed learning paradigms. Additionally, we propose a novel taxonomy distinguishing Aggregation from Alignment, introducing the concept of alignment as a fundamental operator alongside aggregation. To bridge theory with practice, we explore available FL frameworks in Python, facilitating real-world implementation. Finally, we systematise key challenges across FL sub-fields, providing insights into open research questions throughout the meta-framework modules. By structuring FL within a meta-framework of modular components and emphasising the dual role of Aggregation and Alignment, this survey provides a holistic and adaptable foundation for understanding and advancing FL research and deployment.
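The survey's central distinction is between Aggregation and Alignment as the two fundamental operators of FL. As a minimal illustration of the aggregation side, the sketch below implements FedAvg-style weighted averaging of client parameters; the function and parameter names are hypothetical and not taken from the survey.

```python
def federated_average(client_weights, client_sizes):
    """Aggregate client model parameters, weighted by local dataset size.

    client_weights: list of per-client parameter vectors (flat lists of floats)
    client_sizes:   list of local dataset sizes, one per client
    """
    total = sum(client_sizes)
    num_params = len(client_weights[0])
    aggregated = [0.0] * num_params
    for weights, size in zip(client_weights, client_sizes):
        # Each client's contribution is proportional to its share of the data.
        for i, w in enumerate(weights):
            aggregated[i] += w * (size / total)
    return aggregated

# Example: two clients with unequal data volumes; the larger client
# (30 samples vs. 10) dominates the aggregated model.
global_model = federated_average(
    client_weights=[[1.0, 2.0], [3.0, 4.0]],
    client_sizes=[10, 30],
)
# global_model == [2.5, 3.5]
```

Alignment-style operators (e.g. matching or distilling heterogeneous client models) do not reduce to such a coordinate-wise average, which is the motivation for treating them as a separate operator in the taxonomy.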
Related papers
- FLsim: A Modular and Library-Agnostic Simulation Framework for Federated Learning [3.62218729239779]
Federated Learning (FL) has undergone significant development since its inception in 2016. We introduce FLsim, a comprehensive FL simulation framework designed to meet the diverse requirements of FL in the literature. We demonstrate the effectiveness and versatility of FLsim in a diverse range of state-of-the-art FL experiments.
arXiv Detail & Related papers (2025-07-15T15:53:01Z) - Feature-Based vs. GAN-Based Learning from Demonstrations: When and Why [50.191655141020505]
This survey provides a comparative analysis of feature-based and GAN-based approaches to learning from demonstrations. We argue that the dichotomy between feature-based and GAN-based methods is increasingly nuanced.
arXiv Detail & Related papers (2025-07-08T11:45:51Z) - In-Context Learning for Gradient-Free Receiver Adaptation: Principles, Applications, and Theory [54.92893355284945]
Deep learning-based wireless receivers offer the potential to dynamically adapt to varying channel environments. Current adaptation strategies, including joint training, hypernetwork-based methods, and meta-learning, either demonstrate limited flexibility or necessitate explicit optimization through gradient descent. This paper presents gradient-free adaptation techniques rooted in the emerging paradigm of in-context learning (ICL).
arXiv Detail & Related papers (2025-06-18T06:43:55Z) - Federated Learning in Practice: Reflections and Projections [17.445826363802997]
Federated Learning (FL) is a machine learning technique that enables multiple entities to collaboratively learn a shared model without exchanging their local data. Production systems from organizations like Google, Apple, and Meta demonstrate the real-world applicability of FL. We propose a redefined FL framework that prioritizes privacy principles rather than rigid definitions.
arXiv Detail & Related papers (2024-10-11T15:10:38Z) - A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FM), the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
arXiv Detail & Related papers (2024-01-09T10:22:23Z) - Towards Open Federated Learning Platforms: Survey and Vision from Technical and Legal Perspectives [34.0620974123791]
Traditional Federated Learning (FL) follows a server-dominated cooperation paradigm.
We advocate rethinking the design of current FL frameworks and extending it to a more general concept: Open Federated Learning Platforms.
arXiv Detail & Related papers (2023-07-05T09:30:14Z) - Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recent FL has been interpreted within a Model-Agnostic Meta-Learning (MAML) framework, which brings FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
arXiv Detail & Related papers (2023-03-23T02:42:10Z) - ModularFed: Leveraging Modularity in Federated Learning Frameworks [8.139264167572213]
We propose a research-focused framework that addresses the complexity of Federated Learning (FL) implementations.
Within this architecture, protocols are blueprints that strictly define the framework's components' design.
Our protocols aim to enable modularity in FL, supporting third-party plug-and-play architecture and dynamic simulators.
arXiv Detail & Related papers (2022-10-31T10:21:19Z) - Federated Learning and Meta Learning: Approaches, Applications, and Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z) - FederatedScope: A Comprehensive and Flexible Federated Learning Platform via Message Passing [63.87056362712879]
We propose a novel and comprehensive federated learning platform, named FederatedScope, which is based on a message-oriented framework.
Compared to the procedural framework, the proposed message-oriented framework is more flexible to express heterogeneous message exchange.
We conduct a series of experiments on the provided easy-to-use and comprehensive FL benchmarks to validate the correctness and efficiency of FederatedScope.
arXiv Detail & Related papers (2022-04-11T11:24:21Z) - Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization [107.72786199113183]
Federated learning (FL) provides a distributed learning framework for multiple participants to collaborate learning without sharing raw data.
In this paper, we propose a novel Split-Mix FL strategy for heterogeneous participants that, once training is done, provides in-situ customization of model sizes and robustness.
arXiv Detail & Related papers (2022-03-18T04:58:34Z) - Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism to build a distributed control and aggregation methodology in regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.