FLCC: Efficient Distributed Federated Learning on IoMT over CSMA/CA
- URL: http://arxiv.org/abs/2304.13549v1
- Date: Wed, 29 Mar 2023 16:36:42 GMT
- Title: FLCC: Efficient Distributed Federated Learning on IoMT over CSMA/CA
- Authors: Abdelaziz Salama, Syed Ali Zaidi, Des McLernon, Mohammed M. H. Qazzaz
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated Learning (FL) has emerged as a promising approach for privacy
preservation, allowing sharing of the model parameters between users and the
cloud server rather than the raw local data. FL approaches have been adopted as
a cornerstone of distributed machine learning (ML) to solve several complex use
cases. FL presents an interesting interplay between communication and ML
performance when implemented over distributed wireless nodes. Both the dynamics
of networking and learning play an important role. In this article, we
investigate the performance of FL on an application that might be used to
improve a remote healthcare system over ad hoc networks which employ CSMA/CA to
schedule their transmissions. Our FL over CSMA/CA (FLCC) model is designed to
eliminate untrusted devices and harness frequency reuse and spatial clustering
techniques to improve the throughput required for coordinating a distributed
implementation of FL in the wireless network.
In our proposed model, frequency allocation is performed on the basis of
spatial clustering using virtual cells. Each cell is assigned an FL server
and dedicated carrier frequencies for exchanging the updated model parameters
within the cell. We present two metrics to evaluate the network performance: 1)
the probability of successful transmission while minimizing interference, and
2) the performance of the distributed FL model in terms of accuracy and loss
while accounting for the networking dynamics.
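The first metric can be illustrated with the textbook slotted-contention model: if each of the n contending nodes in a cell transmits in a slot with probability p, a slot is successful when exactly one node transmits. This is a minimal sketch under that standard assumption, not necessarily the authors' exact CSMA/CA analysis:

```python
def p_success(n: int, p: float) -> float:
    """Probability that exactly one of n contending nodes transmits
    in a slot (a collision-free, successful transmission), under the
    classic slotted contention model with per-slot transmit
    probability p for every node."""
    return n * p * (1.0 - p) ** (n - 1)

# Fewer contenders per virtual cell -> higher per-slot success
# probability for the same transmit probability.
print(p_success(5, 0.1))   # 5 contending nodes
print(p_success(20, 0.1))  # 20 contending nodes
```

Shrinking the per-cell contender count, which is what the virtual cells do, raises the success probability for a fixed transmit probability.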
We benchmark the proposed approach on the well-known MNIST dataset. We
demonstrate that the proposed approach outperforms baseline FL algorithms by
explicitly defining the user-selection criteria and by achieving high accuracy
in a robust network.
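A compact sketch of the two FLCC building blocks the abstract names, spatial clustering into virtual cells and per-cell model aggregation. The node counts, model sizes, and the use of plain k-means here are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 2-D node positions and tiny per-node model vectors.
positions = rng.uniform(0, 100, size=(12, 2))   # 12 IoMT nodes in a square
local_weights = rng.normal(size=(12, 4))        # toy 4-parameter local models

def kmeans_cells(points, k, iters=20):
    """Plain k-means, used here as the spatial-clustering step that
    defines the 'virtual cells' (a sketch; the paper's clustering
    method may differ)."""
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = points[labels == c].mean(axis=0)
    return labels

labels = kmeans_cells(positions, k=3)

# Each virtual cell aggregates its members' models (FedAvg within the
# cell); a dedicated carrier frequency per cell limits inter-cell
# interference.
cell_models = {c: local_weights[labels == c].mean(axis=0)
               for c in range(3) if (labels == c).any()}
```

Aggregation within a cell is the standard FedAvg mean; the cell structure only changes who averages with whom and on which frequency.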
Related papers
- Vertical Federated Learning over Cloud-RAN: Convergence Analysis and
System Optimization [82.12796238714589]
We propose a novel cloud radio access network (Cloud-RAN) based vertical FL system to enable fast and accurate model aggregation.
We characterize the convergence behavior of the vertical FL algorithm considering both uplink and downlink transmissions.
We establish a system optimization framework by joint transceiver and fronthaul quantization design, for which successive convex approximation and alternate convex search based system optimization algorithms are developed.
arXiv Detail & Related papers (2023-05-04T09:26:03Z)
- Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation
and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recently, FL has been interpreted within a Model-Agnostic Meta-Learning (MAML) framework, which gives FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
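The MAML view can be made concrete on a toy quadratic loss: instead of minimizing the average client loss, the server minimizes the average loss *after* one local adaptation step, which is what buys fast per-client adaptation. All the numbers below (client targets, step sizes) are hypothetical:

```python
import numpy as np

# Each hypothetical client i has loss f_i(w) = 0.5 * ||w - t_i||^2.
targets = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
alpha, beta = 0.1, 0.05   # inner (adaptation) and outer (meta) step sizes

w = np.zeros(2)
for _ in range(200):
    meta_grad = np.zeros_like(w)
    for t in targets:
        g_inner = w - t                 # gradient of f_i at w
        w_adapt = w - alpha * g_inner   # one local adaptation step
        # Gradient of f_i(w_adapt) w.r.t. w; for this quadratic the
        # Jacobian of the inner step is (1 - alpha) * I.
        meta_grad += (1 - alpha) * (w_adapt - t)
    w -= beta * meta_grad / len(targets)
```

For this symmetric toy problem the meta-model converges to the mean of the client targets, from which each client reaches its own optimum quickly after local adaptation.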
arXiv Detail & Related papers (2023-03-23T02:42:10Z)
- Digital Over-the-Air Federated Learning in Multi-Antenna Systems [30.137208705209627]
We study the performance optimization of federated learning (FL) over a realistic wireless communication system with digital modulation and over-the-air computation (AirComp).
We propose a modified federated averaging (FedAvg) algorithm that combines digital modulation with AirComp to mitigate wireless fading while ensuring communication efficiency.
An artificial neural network (ANN) is used to estimate the local FL models of all devices and to adjust the beamforming matrices at the parameter server (PS) for future model transmission.
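The AirComp idea behind this line of work is that simultaneous analog transmissions superpose, so the channel itself delivers the sum that FedAvg needs. A stripped-down sketch, ignoring fading, modulation, and beamforming (which are this paper's actual focus):

```python
import numpy as np

rng = np.random.default_rng(1)

K, d = 8, 16
updates = rng.normal(size=(K, d))   # local model updates of K devices
noise = 0.01 * rng.normal(size=d)   # receiver noise

# When all K devices transmit at once on the same analog channel, the
# server receives (approximately) the noisy sum of their updates:
received_sum = updates.sum(axis=0) + noise

# FedAvg then only needs a scalar division at the server.
fedavg_estimate = received_sum / K
true_avg = updates.mean(axis=0)
```

One channel use aggregates all K devices, which is why AirComp scales so well compared with sequential uploads.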
arXiv Detail & Related papers (2023-02-04T07:26:06Z)
- Over-The-Air Clustered Wireless Federated Learning [2.2530496464901106]
Over-the-air (OTA) FL is preferred because clients can transmit their parameter updates to a server simultaneously.
In the absence of a powerful server, a decentralised strategy is employed in which clients communicate with their neighbors to reach a consensus ML model.
We propose the OTA semi-decentralised clustered wireless FL (CWFL) and CWFL-Prox algorithms, which are communication-efficient compared to the decentralised FL strategy.
arXiv Detail & Related papers (2022-11-07T08:34:35Z)
- Performance Optimization for Variable Bitwidth Federated Learning in
Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
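The core trade-off the RL agent navigates, quantization error versus uplink bitwidth, can be sketched with a plain uniform quantizer (an illustration only; the paper's quantizer and RL formulation are more involved):

```python
import numpy as np

def quantize(w, bits):
    """Uniformly quantize a weight vector to the given bitwidth over
    its own [min, max] range; a stand-in for the kind of per-round
    bitwidth choice the scheme's RL agent makes."""
    lo, hi = w.min(), w.max()
    levels = 2 ** bits - 1
    step = (hi - lo) / levels
    return np.round((w - lo) / step) * step + lo

rng = np.random.default_rng(2)
w = rng.normal(size=1000)   # toy local model parameters

# More bits -> lower quantization error, but a larger uplink payload.
err4 = np.abs(quantize(w, 4) - w).max()
err8 = np.abs(quantize(w, 8) - w).max()
```

The maximum error of this quantizer is bounded by half the step size, so each extra bit roughly halves the worst-case distortion while adding one bit per parameter to the upload.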
arXiv Detail & Related papers (2022-09-21T08:52:51Z)
- SlimFL: Federated Learning with Superposition Coding over Slimmable
Neural Networks [56.68149211499535]
Federated learning (FL) is a key enabler for efficient communication and computing leveraging devices' distributed computing capabilities.
This paper proposes a novel learning framework by integrating FL and width-adjustable slimmable neural networks (SNNs).
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
arXiv Detail & Related papers (2022-03-26T15:06:13Z)
- Joint Superposition Coding and Training for Federated Learning over
Multi-Width Neural Networks [52.93232352968347]
This paper aims to integrate two synergetic technologies, federated learning (FL) and the width-adjustable slimmable neural network (SNN).
FL preserves data privacy by exchanging the locally trained models of mobile devices. SNNs, however, are non-trivial to train, particularly under wireless connections with time-varying channel conditions.
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
arXiv Detail & Related papers (2021-12-05T11:17:17Z)
- Federated Learning Over Cellular-Connected UAV Networks with Non-IID
Datasets [19.792426676330212]
Federated learning (FL) is a promising distributed learning technique.
This paper proposes a new FL model over a cellular-connected unmanned aerial vehicle (UAV) network.
We propose a tractable analytical framework for the uplink outage probability in the cellular-connected UAV network.
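For intuition on an uplink outage metric: under the standard Rayleigh-fading assumption the instantaneous SNR is exponentially distributed and the outage probability has a closed form. A sketch with hypothetical numbers (the paper's cellular-connected UAV analysis is considerably richer):

```python
import math

def rayleigh_outage(snr_avg_db: float, snr_th_db: float) -> float:
    """Outage probability P(SNR < threshold) when the instantaneous
    SNR is exponentially distributed with the given average SNR
    (Rayleigh fading)."""
    snr_avg = 10 ** (snr_avg_db / 10)
    snr_th = 10 ** (snr_th_db / 10)
    return 1.0 - math.exp(-snr_th / snr_avg)

# Higher average SNR -> lower outage for the same threshold.
out_low = rayleigh_outage(10.0, 5.0)   # 10 dB average, 5 dB threshold
out_high = rayleigh_outage(20.0, 5.0)  # 20 dB average, same threshold
```

In an FL round, an outage on a client's uplink means its model update is simply lost, which is what couples the learning dynamics to this probability.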
arXiv Detail & Related papers (2021-10-13T23:15:20Z)
- User Scheduling for Federated Learning Through Over-the-Air Computation [22.853678584121862]
A new machine learning technique termed federated learning (FL) aims to preserve data at the edge devices and to exchange only ML model parameters in the learning process.
FL not only reduces the communication needs but also helps to protect the local privacy.
AirComp is capable of computing while transmitting data by allowing multiple devices to send data simultaneously by using analog modulation.
arXiv Detail & Related papers (2021-08-05T23:58:15Z)
- Delay Minimization for Federated Learning Over Wireless Communication
Networks [172.42768672943365]
The problem of delay minimization for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution.
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
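Bisection itself is generic: it repeatedly halves an interval on which a monotone condition changes sign. A minimal sketch with a made-up target function (the paper's actual function encodes its delay/resource trade-off):

```python
def bisection(f, lo, hi, tol=1e-9):
    """Find a root of f on [lo, hi], assuming f(lo) and f(hi) have
    opposite signs; the same search pattern is used to locate the
    point where an optimality condition flips."""
    assert f(lo) * f(hi) <= 0, "no sign change on the interval"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid    # sign change is in the left half
        else:
            lo = mid    # sign change is in the right half
    return 0.5 * (lo + hi)

# Hypothetical example: solve x**3 = 2 on [0, 2].
root = bisection(lambda x: x ** 3 - 2.0, 0.0, 2.0)
```

Each iteration halves the interval, so reaching tolerance `tol` takes about log2((hi - lo) / tol) evaluations, which is what makes the scheme cheap enough to run per resource-allocation decision.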
arXiv Detail & Related papers (2020-07-05T19:00:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.