Over-The-Air Clustered Wireless Federated Learning
- URL: http://arxiv.org/abs/2211.03363v3
- Date: Tue, 17 Oct 2023 04:27:42 GMT
- Title: Over-The-Air Clustered Wireless Federated Learning
- Authors: Ayush Madhan-Sohini, Divin Dominic, Nazreen Shah, Ranjitha Prasad
- Abstract summary: Over-the-air (OTA) FL is preferred since the clients can transmit parameter updates simultaneously to a server.
In the absence of a powerful server, a decentralised strategy is employed in which clients communicate with their neighbors to obtain a consensus ML model.
We propose the OTA semi-decentralised clustered wireless FL (CWFL) and CWFL-Prox algorithms, which are communication-efficient compared to the decentralised FL strategy.
- Score: 2.2530496464901106
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Privacy and bandwidth constraints have led to the use of federated learning
(FL) in wireless systems, where training a machine learning (ML) model is
accomplished collaboratively without sharing raw data. While using
bandwidth-constrained uplink wireless channels, over-the-air (OTA) FL is
preferred since the clients can transmit parameter updates simultaneously to a
server. A powerful server may not be available for parameter aggregation due to
increased latency and server failures. In the absence of a powerful server, a
decentralised strategy is employed in which clients communicate with their
neighbors to reach a consensus ML model, at the cost of a large communication
overhead. In this work, we propose the OTA semi-decentralised clustered wireless
FL (CWFL) and CWFL-Prox algorithms, which are communication-efficient compared
to the decentralised FL strategy, while the parameter updates converge to global
minima at a rate of O(1/T) for each cluster. Using the MNIST and CIFAR10
datasets, we demonstrate that the accuracy of CWFL is comparable to the
central-server-based COTAF and proximal-constraint-based methods, while
outperforming single-client ML models by a large margin.
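As a rough illustration of the clustered OTA aggregation idea described in the abstract, the sketch below simulates the clients of one cluster transmitting their parameter updates simultaneously over a noisy channel, so that the cluster head receives their superposed sum. The noise model, function names, and the final averaging across clusters are illustrative assumptions, not the authors' exact CWFL/CWFL-Prox formulation.

```python
# Minimal sketch of clustered over-the-air (OTA) aggregation, assuming a simple
# additive-Gaussian multiple-access channel; illustrative only, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

def ota_cluster_aggregate(client_updates, noise_std=0.01):
    """Simulate simultaneous analog transmission within one cluster.

    All clients transmit at once, so the cluster head receives the
    coordinate-wise sum of the updates corrupted by additive channel noise.
    """
    superposed = np.sum(client_updates, axis=0)               # over-the-air summation
    noise = rng.normal(0.0, noise_std, size=superposed.shape)
    return (superposed + noise) / len(client_updates)         # noisy cluster average

# Toy run: 3 clusters of 4 clients, each holding a 10-dimensional update.
clusters = [[rng.normal(0.0, 1.0, 10) for _ in range(4)] for _ in range(3)]
cluster_models = [ota_cluster_aggregate(c) for c in clusters]
# In a semi-decentralised scheme, clusters would then exchange these models over
# much sparser links instead of relying on a single central server.
print(np.mean(cluster_models, axis=0))
```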
Related papers
- Smart Sampling: Helping from Friendly Neighbors for Decentralized Federated Learning [10.917048408073846]
We introduce AFIND+, a simple yet efficient algorithm for sampling and aggregating neighbors in Decentralized FL (DFL).
AFIND+ identifies helpful neighbors, adaptively adjusts the number of selected neighbors, and strategically aggregates the sampled neighbors' models.
Numerical results on real-world datasets demonstrate that AFIND+ outperforms other sampling algorithms in DFL.
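A minimal sketch of the general neighbor-sampling-and-aggregation idea follows, assuming a made-up "helpfulness" score based on local loss reduction and a fixed neighbor budget; this is not the actual AFIND+ selection rule.

```python
# Illustrative neighbor sampling and aggregation for decentralized FL.
# The helpfulness score and neighbor budget are placeholders, not AFIND+'s criteria.
import numpy as np

def sample_and_aggregate(own_model, neighbor_models, local_loss, k=2):
    """Keep the k neighbors whose models most reduce the local loss, then average."""
    gains = [local_loss(own_model) - local_loss(m) for m in neighbor_models]
    ranked = np.argsort(gains)[::-1][:k]                      # most helpful first
    selected = [neighbor_models[i] for i in ranked if gains[i] > 0]
    if not selected:
        return own_model                                      # nobody helps; keep own model
    return np.mean([own_model] + selected, axis=0)

# Toy usage with a quadratic local loss.
rng = np.random.default_rng(1)
target = np.ones(5)
loss = lambda w: float(np.sum((w - target) ** 2))
own = rng.normal(size=5)
neighbors = [rng.normal(size=5) for _ in range(6)]
print(sample_and_aggregate(own, neighbors, loss, k=3))
```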
arXiv Detail & Related papers (2024-07-05T12:10:54Z)
- SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low Computational Overhead [75.87007729801304]
SpaFL, a communication-efficient FL framework, is proposed to optimize sparse model structures with low computational overhead.
Experiments show that SpaFL improves accuracy while requiring much less communication and computing resources compared to sparse baselines.
arXiv Detail & Related papers (2024-06-01T13:10:35Z)
- Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
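A small sketch of an event-triggered upload rule in the spirit of this setting follows: a client communicates only when its update has changed sufficiently since the last transmission. The threshold test is an assumption for illustration, not the paper's SAGA-based scheme.

```python
# Hedged sketch of event-triggered communication: transmit only when the update
# differs enough from the last transmitted one. Illustrative threshold rule only.
import numpy as np

class EventTriggeredClient:
    def __init__(self, dim, threshold=0.5):
        self.last_sent = np.zeros(dim)
        self.threshold = threshold

    def maybe_transmit(self, update):
        """Return the update if the trigger fires, otherwise None (no upload)."""
        if np.linalg.norm(update - self.last_sent) > self.threshold:
            self.last_sent = update.copy()
            return update
        return None

client = EventTriggeredClient(dim=4, threshold=0.5)
print(client.maybe_transmit(np.array([0.1, 0.0, 0.0, 0.0])))   # None: change too small
print(client.maybe_transmit(np.array([1.0, 0.0, 0.0, 0.0])))   # transmitted
```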
arXiv Detail & Related papers (2024-02-28T03:27:10Z)
- Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for hierarchical FL (HFL) in wireless networks to reduce the neural network scale.
We show that the proposed HFL with model pruning achieves learning accuracy similar to HFL without pruning, while reducing the communication cost by about 50 percent.
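A toy sketch of magnitude-based pruning, which reduces the number of parameters that must be communicated; the fixed 50% ratio mirrors the reported saving but is not the paper's adaptive pruning rule.

```python
# Simple magnitude pruning: zero out the smallest-magnitude weights so fewer
# parameters need to be sent. Illustrative only, not the paper's adaptive scheme.
import numpy as np

def magnitude_prune(weights, prune_ratio=0.5):
    """Zero out the smallest-magnitude weights; return pruned weights and the mask."""
    flat = np.abs(weights).ravel()
    k = int(prune_ratio * flat.size)
    threshold = np.partition(flat, k)[k] if k > 0 else -np.inf
    mask = (np.abs(weights) >= threshold).astype(weights.dtype)
    return weights * mask, mask

rng = np.random.default_rng(2)
w = rng.normal(size=(4, 4))
pruned, mask = magnitude_prune(w, prune_ratio=0.5)
print(f"kept {int(mask.sum())} of {mask.size} weights")   # roughly half survive
```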
arXiv Detail & Related papers (2023-05-15T22:04:49Z)
- FLCC: Efficient Distributed Federated Learning on IoMT over CSMA/CA [0.0]
Federated Learning (FL) has emerged as a promising approach for privacy preservation.
This article investigates the performance of FL on an application that might be used to improve a remote healthcare system over ad hoc networks.
We present two metrics to evaluate the network performance: 1) the probability of successful transmission while minimizing interference, and 2) the performance of the distributed FL model in terms of accuracy and loss.
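For the first metric, here is a back-of-the-envelope sketch under a simplified slotted-contention assumption (a slot succeeds when exactly one of N clients transmits); this is not the paper's CSMA/CA analysis.

```python
# Simplified random-access success probability: P(exactly one of N transmits).
# Illustrative assumption only, not the CSMA/CA model used in the paper.
def success_probability(n_clients: int, p_transmit: float) -> float:
    return n_clients * p_transmit * (1.0 - p_transmit) ** (n_clients - 1)

n = 10
print(success_probability(n, 1.0 / n))   # maximised near p = 1/N, about 0.387
```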
arXiv Detail & Related papers (2023-03-29T16:36:42Z)
- Communication and Storage Efficient Federated Split Learning [19.369076939064904]
Federated Split Learning preserves the parallel model training principle of FL.
The server has to maintain separate models for every client, resulting in a significant computation and storage requirement.
This paper proposes a communication and storage efficient federated and split learning strategy.
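A minimal sketch of the split-learning forward pass this line of work builds on: the client computes the early layers and ships only intermediate activations to the server, which completes the model. The cut point and layer shapes below are arbitrary illustrations.

```python
# Split-learning forward pass sketch: only activations cross the network,
# while raw data and client-side weights stay on the device. Shapes are made up.
import numpy as np

rng = np.random.default_rng(3)
W_client = rng.normal(size=(32, 16))   # client-side layer, kept on the device
W_server = rng.normal(size=(16, 10))   # server-side layer

def client_forward(x):
    return np.maximum(x @ W_client, 0.0)        # ReLU activations ("smashed data")

def server_forward(activations):
    return activations @ W_server               # server completes the forward pass

x = rng.normal(size=(1, 32))
logits = server_forward(client_forward(x))      # only activations are transmitted
print(logits.shape)                             # (1, 10)
```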
arXiv Detail & Related papers (2023-02-11T04:44:29Z)
- SlimFL: Federated Learning with Superposition Coding over Slimmable Neural Networks [56.68149211499535]
Federated learning (FL) is a key enabler for efficient communication and computing leveraging devices' distributed computing capabilities.
This paper proposes a novel learning framework by integrating FL and width-adjustable slimmable neural networks (SNNs).
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
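A toy sketch of the weight sharing behind width-adjustable (slimmable) networks follows, where a 0.5x-width sub-network reuses the first half of the full network's neurons; the superposition coding of the two widths is not modeled here, and all shapes are made up.

```python
# Slimmable-network weight sharing sketch: one parameter set serves two widths.
# Illustrative shapes only; superposition coding/training are not shown.
import numpy as np

rng = np.random.default_rng(4)
W1 = rng.normal(size=(20, 64))   # full-width hidden layer (64 units)
W2 = rng.normal(size=(64, 10))

def forward(x, width_mult=1.0):
    h = int(64 * width_mult)                    # number of active hidden units
    hidden = np.maximum(x @ W1[:, :h], 0.0)
    return hidden @ W2[:h, :]                   # same output size for every width

x = rng.normal(size=(1, 20))
print(forward(x, 1.0).shape, forward(x, 0.5).shape)   # both (1, 10)
```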
arXiv Detail & Related papers (2022-03-26T15:06:13Z)
- Joint Superposition Coding and Training for Federated Learning over Multi-Width Neural Networks [52.93232352968347]
This paper aims to integrate two synergetic technologies: federated learning (FL) and width-adjustable slimmable neural networks (SNNs).
FL preserves data privacy by exchanging the locally trained models of mobile devices. Training SNNs is, however, non-trivial, particularly under wireless connections with time-varying channel conditions.
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
arXiv Detail & Related papers (2021-12-05T11:17:17Z)
- RC-SSFL: Towards Robust and Communication-efficient Semi-supervised Federated Learning System [25.84191221776459]
Federated Learning (FL) is an emerging decentralized artificial intelligence paradigm.
Current systems rely heavily on a strong assumption: all clients have a wealth of ground truth labeled data.
We present a practical Robust and Communication-efficient Semi-supervised FL (RC-SSFL) system design.
arXiv Detail & Related papers (2020-12-08T14:02:56Z)
- Over-the-Air Federated Learning from Heterogeneous Data [107.05618009955094]
Federated learning (FL) is a framework for distributed learning of centralized models.
We develop a Convergent OTA FL (COTAF) algorithm, which enhances the common local stochastic gradient descent (SGD) FL algorithm.
We numerically show that the precoding induced by COTAF notably improves the convergence rate and the accuracy of models trained via OTA FL.
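A hedged sketch of power-constrained precoding for OTA aggregation follows: clients scale their updates to a common power budget before transmitting simultaneously, and the server undoes the scaling after receiving the noisy sum. The scaling rule is a simplification, not COTAF's exact time-varying precoder.

```python
# Simplified precoded OTA aggregation round: common scaling for a power budget,
# simultaneous (superposed) transmission, then de-scaling at the receiver.
import numpy as np

rng = np.random.default_rng(5)

def precoded_ota_round(updates, power_budget=1.0, noise_std=0.05):
    """Scale updates to a shared power budget, superpose them, then de-scale."""
    max_norm = max(np.linalg.norm(u) for u in updates)
    alpha = np.sqrt(power_budget) / max_norm            # common precoding factor
    received = sum(alpha * u for u in updates)          # simultaneous transmission
    received += rng.normal(0.0, noise_std, size=received.shape)
    return received / (alpha * len(updates))            # undo scaling, average

updates = [rng.normal(size=8) for _ in range(5)]
print(precoded_ota_round(updates))
```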
arXiv Detail & Related papers (2020-09-27T08:28:25Z)
- Coded Federated Learning [5.375775284252717]
Federated learning is a method of training a global model from decentralized data distributed across client devices.
Our results show that coded federated learning (CFL) allows the global model to converge nearly four times faster than an uncoded approach.
arXiv Detail & Related papers (2020-02-21T23:06:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.