Modular Federated Learning
- URL: http://arxiv.org/abs/2209.03090v1
- Date: Wed, 7 Sep 2022 11:54:55 GMT
- Title: Modular Federated Learning
- Authors: Kuo-Yun Liang, Abhishek Srinivasan, Juan Carlos Andresen
- Abstract summary: Federated learning is an approach to train machine learning models at the edge of the network.
This paper proposes ModFL as a federated learning framework that splits the models into a configuration module and an operation module.
We show that ModFL outperforms FedPer for non-IID data partitions of CIFAR-10 and STL-10 using CNNs.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning is an approach to train machine learning models at the
edge of the network, as close as possible to where the data is produced,
motivated by the growing infeasibility of streaming and centrally storing the
large amounts of data produced by edge devices, as well as by data privacy
concerns. This learning paradigm needs algorithms that are robust to both
device heterogeneity and data heterogeneity. This paper proposes ModFL as a
federated learning framework that splits models into a configuration module
and an operation module, enabling federated learning of the individual modules.
This modular approach makes it possible to extract knowledge from a group of
heterogeneous devices as well as from the non-IID data produced by their users.
It can be viewed as an extension of FedPer, the federated learning framework
with personalisation layers that addresses data heterogeneity. We
show that ModFL outperforms FedPer for non-IID data partitions of CIFAR-10 and
STL-10 using CNNs. Our results on time-series data with HAPT, RWHAR, and WISDM
datasets using RNNs remain inconclusive; we argue that the chosen datasets do
not highlight the advantages of ModFL, but that in the worst case it
performs as well as FedPer.
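To make the idea of per-module federated learning concrete, here is a minimal PyTorch sketch of one plausible reading of the split: configuration modules are aggregated within groups of like devices (handling device heterogeneity), while operation modules are aggregated across all clients. The class names, grouping rule, and plain parameter averaging are illustrative assumptions, not the authors' exact scheme.

```python
# Hedged sketch of a ModFL-style round; names and grouping are assumptions.
import copy
from collections import defaultdict

import torch
import torch.nn as nn

class ModularClientModel(nn.Module):
    """Toy client model split into two independently federatable modules."""
    def __init__(self, in_dim: int, hidden: int = 64, n_classes: int = 10):
        super().__init__()
        # Configuration module: adapts the device-specific input format.
        self.configuration = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # Operation module: the task logic learned across all users.
        self.operation = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.operation(self.configuration(x))

def average_state_dicts(state_dicts):
    """Element-wise mean of parameter tensors (plain FedAvg-style averaging)."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

def modfl_round(clients: dict, device_type: dict) -> None:
    """One aggregation round: configuration modules are averaged within groups
    of same-type devices; operation modules are averaged across all clients."""
    groups = defaultdict(list)
    for cid, model in clients.items():
        groups[device_type[cid]].append(model.configuration.state_dict())
    config_avgs = {t: average_state_dicts(sds) for t, sds in groups.items()}
    operation_avg = average_state_dicts(
        [m.operation.state_dict() for m in clients.values()])
    for cid, model in clients.items():
        model.configuration.load_state_dict(config_avgs[device_type[cid]])
        model.operation.load_state_dict(operation_avg)

# Example: two phone-type clients and one watch-type client (illustrative).
clients = {"a": ModularClientModel(32), "b": ModularClientModel(32),
           "c": ModularClientModel(16)}
device_type = {"a": "phone", "b": "phone", "c": "watch"}
modfl_round(clients, device_type)
```

Under this reading, devices with the same input format share a configuration module, while every device contributes to the common operation module, which is what lets knowledge flow across a heterogeneous fleet.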
Related papers
- FBFL: A Field-Based Coordination Approach for Data Heterogeneity in Federated Learning [1.079960007119637]
This paper formalizes Field-Based Federated Learning (FBFL) and evaluates it extensively using MNIST, FashionMNIST, and Extended MNIST datasets.
We demonstrate that, when operating under IID data conditions, FBFL performs comparably to the widely-used FedAvg algorithm.
In challenging non-IID scenarios, FBFL not only outperforms FedAvg but also surpasses other state-of-the-art methods, namely FedProx and Scaffold.
arXiv Detail & Related papers (2025-02-12T17:10:53Z)
- MultiConfederated Learning: Inclusive Non-IID Data handling with Decentralized Federated Learning [1.2726316791083532]
Federated Learning (FL) has emerged as a prominent privacy-preserving technique for enabling use cases like confidential clinical machine learning.
FL operates by aggregating models trained on remote devices that own the data.
We propose MultiConfederated Learning, a decentralized FL framework designed to handle non-IID data.
arXiv Detail & Related papers (2024-04-20T16:38:26Z)
- FLIGAN: Enhancing Federated Learning with Incomplete Data using GAN [1.5749416770494706]
Federated Learning (FL) provides a privacy-preserving mechanism for distributed training of machine learning models on networked devices.
We propose FLIGAN, a novel approach to address the issue of data incompleteness in FL.
Our methodology adheres to FL's privacy requirements by generating synthetic data in a federated manner without sharing the actual data in the process.
arXiv Detail & Related papers (2024-03-25T16:49:38Z)
- Fake It Till Make It: Federated Learning with Consensus-Oriented Generation [52.82176415223988]
We propose federated learning with consensus-oriented generation (FedCOG).
FedCOG consists of two key components on the client side: complementary data generation and knowledge-distillation-based model training (a minimal sketch of the distillation loss follows below).
Experiments on classical and real-world FL datasets show that FedCOG consistently outperforms state-of-the-art methods.
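As background for the second component, here is a minimal sketch of a standard knowledge-distillation loss (temperature-scaled KL divergence); the temperature and weighting are common defaults, not values taken from the FedCOG paper.

```python
# Hedged sketch: a standard knowledge-distillation loss (Hinton et al., 2015).
# FedCOG's exact client-side objective may differ.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student outputs."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher,
                    reduction="batchmean") * temperature ** 2
```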
arXiv Detail & Related papers (2023-12-10T18:49:59Z)
- Federated Learning and Meta Learning: Approaches, Applications, and Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z)
- Neural Attentive Circuits [93.95502541529115]
We introduce a general-purpose yet modular neural architecture called Neural Attentive Circuits (NACs).
NACs learn the parameterization and a sparse connectivity of neural modules without using domain knowledge.
NACs achieve an 8x speedup at inference time while losing less than 3% performance.
arXiv Detail & Related papers (2022-10-14T18:00:07Z)
- Multi-Edge Server-Assisted Dynamic Federated Learning with an Optimized Floating Aggregation Point [51.47520726446029]
Cooperative edge learning (CE-FL) is a distributed machine learning architecture.
We model the processes involved in CE-FL and analyse its training.
We show the effectiveness of our framework with the data collected from a real-world testbed.
arXiv Detail & Related papers (2022-03-26T00:41:57Z)
- Secure Neuroimaging Analysis using Federated Learning with Homomorphic Encryption [14.269757725951882]
Federated learning (FL) enables distributed computation of machine learning models over disparate, remote data sources.
Recent membership attacks show that private or sensitive personal data can sometimes be leaked or inferred when model parameters or summary statistics are shared with a central site.
We propose a framework for secure FL using fully homomorphic encryption (FHE); an illustrative sketch of homomorphic aggregation follows below.
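To illustrate the general idea of FHE-based aggregation, here is a minimal sketch using the TenSEAL library's CKKS scheme: the server sums encrypted client updates without decrypting them. The library choice and parameters are illustrative assumptions; the paper's actual stack may differ.

```python
# Hedged sketch: aggregating encrypted client updates with CKKS via TenSEAL.
# Parameters are illustrative defaults, not taken from the paper.
import tenseal as ts

# Encryption context held by the key owner.
ctx = ts.context(ts.SCHEME_TYPE.CKKS,
                 poly_modulus_degree=8192,
                 coeff_mod_bit_sizes=[60, 40, 40, 60])
ctx.global_scale = 2 ** 40
ctx.generate_galois_keys()

# Each client encrypts its (flattened) model update before upload.
client_updates = [[0.1, -0.2, 0.3], [0.2, 0.0, -0.1], [0.0, 0.1, 0.2]]
encrypted = [ts.ckks_vector(ctx, u) for u in client_updates]

# The server averages ciphertexts without ever seeing plaintext updates.
agg = encrypted[0]
for enc in encrypted[1:]:
    agg = agg + enc
avg = agg * (1.0 / len(encrypted))

# Only the key holder can decrypt the aggregated update.
print(avg.decrypt())  # approx. [0.1, -0.0333, 0.1333]
```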
arXiv Detail & Related papers (2021-08-07T12:15:52Z)
- FedMix: Approximation of Mixup under Mean Augmented Federated Learning [60.503258658382]
Federated learning (FL) allows edge devices to collectively learn a model without directly sharing data within each device.
Current state-of-the-art algorithms suffer from performance degradation as the heterogeneity of local data across clients increases.
We propose a new augmentation algorithm, named FedMix, which is inspired by a phenomenal yet simple data augmentation method, Mixup (vanilla Mixup is sketched below).
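For context, here is a minimal sketch of vanilla Mixup (Zhang et al., 2018), the augmentation FedMix builds on; FedMix itself approximates this under FL constraints, where raw examples cannot be paired across clients.

```python
# Hedged sketch: vanilla Mixup, not FedMix's federated approximation of it.
import torch

def mixup_batch(x: torch.Tensor, y_onehot: torch.Tensor, alpha: float = 0.2):
    """Blend each example with a randomly paired one from the same batch."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mixed = lam * x + (1.0 - lam) * x[perm]
    y_mixed = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mixed, y_mixed
```

In ordinary training, a client would apply mixup_batch to each minibatch before the usual forward and backward pass.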
arXiv Detail & Related papers (2021-07-01T06:14:51Z)
- Rethinking Architecture Design for Tackling Data Heterogeneity in Federated Learning [53.73083199055093]
We show that attention-based architectures (e.g., Transformers) are fairly robust to distribution shifts.
Our experiments show that replacing convolutional networks with Transformers can greatly reduce catastrophic forgetting of previous devices.
arXiv Detail & Related papers (2021-06-10T21:04:18Z)
- Evaluation Framework For Large-scale Federated Learning [10.127616622630514]
Federated learning is proposed as a machine learning setting to enable distributed edge devices, such as mobile phones, to collaboratively learn a shared prediction model.
In this paper, we introduce a framework designed for large-scale federated learning, consisting of approaches for generating datasets and a modular evaluation framework.
arXiv Detail & Related papers (2020-03-03T15:12:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.