SlimFL: Federated Learning with Superposition Coding over Slimmable
Neural Networks
- URL: http://arxiv.org/abs/2203.14094v1
- Date: Sat, 26 Mar 2022 15:06:13 GMT
- Title: SlimFL: Federated Learning with Superposition Coding over Slimmable
Neural Networks
- Authors: Won Joon Yun, Yunseok Kwak, Hankyul Baek, Soyi Jung, Mingyue Ji, Mehdi
Bennis, Jihong Park, and Joongheon Kim
- Abstract summary: Federated learning (FL) is a key enabler for efficient communication and computing leveraging devices' distributed computing capabilities.
This paper proposes a novel learning framework by integrating FL and width-adjustable slimmable neural networks (SNNs).
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
- Score: 56.68149211499535
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Federated learning (FL) is a key enabler for efficient communication and
computing leveraging devices' distributed computing capabilities. However,
applying FL in practice is challenging due to the local devices' heterogeneous
energy, wireless channel conditions, and non-independently and identically
distributed (non-IID) data distributions. To cope with these issues, this paper
proposes a novel learning framework by integrating FL and width-adjustable
slimmable neural networks (SNNs). Integrating FL with SNNs is challenging due to
time-varying channel conditions and data distributions. In addition, existing
multi-width SNN training algorithms are sensitive to the data distributions
across devices, which makes SNN ill-suited for FL. Motivated by this, we
propose a communication and energy-efficient SNN-based FL (named SlimFL) that
jointly utilizes superposition coding (SC) for global model aggregation and
superposition training (ST) for updating local models. By applying SC, SlimFL
exchanges a superposition of multiple width configurations, from which as many
configurations as the communication throughput allows can be decoded. Leveraging ST, SlimFL
aligns the forward propagation of different width configurations while avoiding
inter-width interference during backpropagation. We formally prove the
convergence of SlimFL. The analysis reveals that SlimFL is not only
communication-efficient but also robust to non-IID data distributions and
poor channel conditions, which is further corroborated by data-intensive
simulations.
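The superposition-training idea above can be illustrated with a toy model. The sketch below is a minimal, hypothetical example (not the paper's actual algorithm or architecture): a "slimmable" linear model whose 0.5x configuration uses only the first half of the weight vector, trained by summing the losses of both width configurations so the shared weights receive gradients from every width in a single update.

```python
import random

random.seed(0)

# Toy "slimmable" linear model: the 0.5x configuration uses only the
# first half of the weight vector; the 1.0x configuration uses all of it.
# Superposition training (minimal sketch, assumed setup): sum the losses
# of both width configurations and take one gradient step, so the shared
# left-aligned weights accumulate gradients from every width.

DIM = 4
HALF = DIM // 2
true_w = [1.0, -2.0, 0.5, 3.0]  # illustrative ground-truth weights

def predict(w, x, width):
    return sum(w[i] * x[i] for i in range(width))

def loss(w, data, width):
    return sum((predict(w, x, width) - y) ** 2 for x, y in data) / len(data)

def st_step(w, data, lr=0.01):
    """One superposition-training step over both width configurations."""
    grad = [0.0] * DIM
    for x, y in data:
        for width in (HALF, DIM):           # 0.5x and 1.0x forward passes
            err = predict(w, x, width) - y
            for i in range(width):          # shared weights accumulate both
                grad[i] += 2 * err * x[i]
    return [wi - lr * g / len(data) for wi, g in zip(w, grad)]

data = []
for _ in range(50):
    x = [random.uniform(-1, 1) for _ in range(DIM)]
    data.append((x, predict(true_w, x, DIM)))

w = [0.0] * DIM
before = loss(w, data, DIM)
for _ in range(200):
    w = st_step(w, data)
after = loss(w, data, DIM)
print(after < before)  # the joint 1.0x loss decreases
```

The key design point is that both width configurations share the left-aligned weights, so a single update serves every width; the paper's contribution is showing how to do this without inter-width interference during backpropagation.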
Related papers
- Hyperdimensional Computing Empowered Federated Foundation Model over Wireless Networks for Metaverse [56.384390765357004]
We propose an integrated federated split learning and hyperdimensional computing framework for emerging foundation models.
This novel approach reduces communication costs, computation load, and privacy risks, making it suitable for resource-constrained edge devices in the Metaverse.
arXiv Detail & Related papers (2024-08-26T17:03:14Z)
- FLCC: Efficient Distributed Federated Learning on IoMT over CSMA/CA [0.0]
Federated Learning (FL) has emerged as a promising approach for privacy preservation.
This article investigates the performance of FL on an application that might be used to improve a remote healthcare system over ad hoc networks.
We present two metrics to evaluate the network performance: 1) the probability of successful transmission while minimizing interference, and 2) the performance of the distributed FL model in terms of accuracy and loss.
arXiv Detail & Related papers (2023-03-29T16:36:42Z)
- Digital Over-the-Air Federated Learning in Multi-Antenna Systems [30.137208705209627]
We study the performance optimization of federated learning (FL) over a realistic wireless communication system with digital modulation and over-the-air computation (AirComp).
We propose a modified federated averaging (FedAvg) algorithm that combines digital modulation with AirComp to mitigate wireless fading while ensuring the communication efficiency.
An artificial neural network (ANN) is used to estimate the local FL models of all devices and adjust the beamforming matrices at the parameter server (PS) for future model transmission.
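For orientation, the baseline that this entry modifies is federated averaging. The sketch below is vanilla FedAvg only (a hypothetical minimal setup; the paper's variant additionally combines digital modulation with AirComp, which is omitted here): each device fits a scalar model locally, and the server averages the local models weighted by local dataset size.

```python
import random

random.seed(1)

# Minimal vanilla FedAvg sketch (assumed setup: each device estimates a
# scalar mean via local SGD; the server aggregates by data-size-weighted
# averaging). This omits the paper's modulation and AirComp components.

def local_update(model, data, lr=0.1, epochs=5):
    for _ in range(epochs):
        for y in data:
            model -= lr * (model - y)   # gradient of 0.5 * (model - y)^2
    return model

def fedavg_round(global_model, device_data):
    sizes = [len(d) for d in device_data]
    local_models = [local_update(global_model, d) for d in device_data]
    total = sum(sizes)
    # Weighted average: devices with more data count proportionally more.
    return sum(m * n for m, n in zip(local_models, sizes)) / total

# Five devices, each holding 20 noisy samples around a common mean of 3.0.
device_data = [[random.gauss(3.0, 1.0) for _ in range(20)] for _ in range(5)]
model = 0.0
for _ in range(10):
    model = fedavg_round(model, device_data)
print(model)  # converges near the global data mean
```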
arXiv Detail & Related papers (2023-02-04T07:26:06Z)
- CFLIT: Coexisting Federated Learning and Information Transfer [18.30671838758503]
We study the coexistence of over-the-air FL and traditional information transfer (IT) in a mobile edge network.
We propose a coexisting federated learning and information transfer (CFLIT) communication framework, where the FL and IT devices share the wireless spectrum in an OFDM system.
arXiv Detail & Related papers (2022-07-26T13:17:28Z)
- Communication and Energy Efficient Slimmable Federated Learning via Superposition Coding and Successive Decoding [55.58665303852148]
Federated learning (FL) has a great potential in exploiting private data by exchanging locally trained models instead of their raw data.
We propose a novel energy and communication efficient FL framework, coined SlimFL.
We show that SlimFL can simultaneously train both $0.5$x and $1.0$x models with reasonable accuracy and convergence speed.
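The superposition-coding and successive-decoding mechanism behind the 0.5x/1.0x split can be sketched in isolation. The example below is an illustrative assumption (BPSK symbols on a noiseless channel; the real system operates over fading channels): the 0.5x model payload is sent as a high-power base layer and the residual 1.0x weights as a low-power enhancement layer, and the receiver decodes the base layer first, cancels it, then decodes the enhancement layer.

```python
import math
import random

random.seed(2)

# Toy superposition coding with successive decoding (assumed setup:
# BPSK symbols, noiseless channel, fixed power split). The high-power
# base layer carries the 0.5x model; the low-power enhancement layer
# carries the remaining 1.0x weights. A receiver decodes as many layers
# as its channel allows, in decreasing power order.

def superpose(base_bits, enh_bits, p_base=0.8):
    p_enh = 1.0 - p_base
    return [math.sqrt(p_base) * (2 * b - 1) + math.sqrt(p_enh) * (2 * e - 1)
            for b, e in zip(base_bits, enh_bits)]

def successive_decode(signal, p_base=0.8):
    base, enh = [], []
    for s in signal:
        b = 1 if s > 0 else 0                          # decode base layer first
        base.append(b)
        residual = s - math.sqrt(p_base) * (2 * b - 1)  # cancel the base layer
        enh.append(1 if residual > 0 else 0)            # then decode enhancement
    return base, enh

base = [random.randint(0, 1) for _ in range(64)]
enh = [random.randint(0, 1) for _ in range(64)]
rx_base, rx_enh = successive_decode(superpose(base, enh))
print(rx_base == base and rx_enh == enh)  # prints True on a clean channel
```

A receiver with a poor channel would stop after the base layer and still obtain a working 0.5x model, which is what makes the scheme graceful under heterogeneous channel conditions.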
arXiv Detail & Related papers (2021-12-05T13:35:26Z)
- Joint Superposition Coding and Training for Federated Learning over Multi-Width Neural Networks [52.93232352968347]
This paper aims to integrate two synergistic technologies: federated learning (FL) and width-adjustable slimmable neural networks (SNNs).
FL preserves data privacy by exchanging the locally trained models of mobile devices, but integrating it with SNNs is non-trivial, particularly under wireless connections with time-varying channel conditions.
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
arXiv Detail & Related papers (2021-12-05T11:17:17Z)
- Bayesian Federated Learning over Wireless Networks [87.37301441859925]
Federated learning is a privacy-preserving and distributed training method using heterogeneous data sets stored at local devices.
This paper presents an efficient modified BFL algorithm called scalableBFL (SBFL).
arXiv Detail & Related papers (2020-12-31T07:32:44Z)
- Delay Minimization for Federated Learning Over Wireless Communication Networks [172.42768672943365]
The problem of delay computation for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution.
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
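The bisection idea mentioned above can be sketched generically. The feasibility model below (a fixed total delay per device, with the deadline feasible when every device fits within it) is an illustrative assumption, not the paper's system model; the point is only the monotone structure that makes bisection applicable.

```python
# Generic bisection sketch for a delay-minimization problem (hypothetical
# feasibility model): feasibility is monotone in the deadline T, so the
# minimal feasible T can be found by bisecting on T.

def feasible(T, delays):
    # A deadline T is feasible if every device's total delay fits in T.
    return all(d <= T for d in delays)

def min_delay_bisection(delays, lo=0.0, hi=100.0, tol=1e-6):
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if feasible(mid, delays):
            hi = mid        # deadline achievable: try a smaller one
        else:
            lo = mid        # infeasible: need more time
    return hi

delays = [3.2, 4.7, 2.9, 4.1]   # per-device compute + transmit delays
print(round(min_delay_bisection(delays), 3))  # → 4.7
```

In this toy model the answer is simply the slowest device's delay; in the paper, feasibility additionally depends on jointly optimized power and resource allocation, which is what makes the check non-trivial.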
arXiv Detail & Related papers (2020-07-05T19:00:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.