Joint Superposition Coding and Training for Federated Learning over
Multi-Width Neural Networks
- URL: http://arxiv.org/abs/2112.02543v1
- Date: Sun, 5 Dec 2021 11:17:17 GMT
- Title: Joint Superposition Coding and Training for Federated Learning over
Multi-Width Neural Networks
- Authors: Hankyul Baek, Won Joon Yun, Yunseok Kwak, Soyi Jung, Mingyue Ji, Mehdi
Bennis, Jihong Park, Joongheon Kim
- Abstract summary: This paper aims to integrate two synergetic technologies, federated learning (FL) and width-adjustable slimmable neural networks (SNNs).
FL preserves data privacy by exchanging the locally trained models of mobile devices. Combining FL and SNNs is, however, non-trivial, particularly under wireless connections with time-varying channel conditions.
We propose a communication- and energy-efficient SNN-based FL framework (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
- Score: 52.93232352968347
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper aims to integrate two synergetic technologies, federated learning
(FL) and width-adjustable slimmable neural network (SNN) architectures. FL
preserves data privacy by exchanging the locally trained models of mobile
devices. By adopting SNNs as local models, FL can flexibly cope with the
time-varying energy capacities of mobile devices. Combining FL and SNNs is
however non-trivial, particularly under wireless connections with time-varying
channel conditions. Furthermore, existing multi-width SNN training algorithms
are sensitive to the data distributions across devices, so they are
ill-suited to FL. Motivated by this, we propose a communication- and
energy-efficient SNN-based FL framework (named SlimFL) that jointly utilizes
superposition coding (SC) for global model aggregation and superposition
training (ST) for updating local models. By applying SC, SlimFL exchanges
superposition-coded combinations of multiple width configurations, of which
as many as possible are decoded for a given communication throughput.
Leveraging ST, SlimFL aligns the forward propagation of different width
configurations while avoiding inter-width interference during
backpropagation. We formally prove the convergence of SlimFL. The result
reveals that SlimFL is not only communication-efficient but can also
counteract non-IID data distributions and poor channel conditions, as
corroborated by simulations.
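To make the superposition training (ST) step concrete, below is a minimal Python (PyTorch) sketch that runs one local update of a toy two-width slimmable model, superposing the 0.5x and 1.0x losses so that a single backward pass updates the shared, nested parameters. The toy MLP, the width ratios, the equal loss weighting, and all function names are illustrative assumptions rather than the authors' implementation; in particular, SlimFL's ST additionally controls backpropagation to avoid inter-width interference, which this simplified sketch does not reproduce.

```python
# Illustrative sketch of superposition training (ST) over a two-width
# slimmable model.  Width ratios, loss weighting, and the toy MLP are
# assumptions for illustration, not the paper's exact implementation.
import torch
import torch.nn.functional as F

class SlimmableMLP(torch.nn.Module):
    """Two-layer MLP whose hidden layer can run at a fraction of its width.
    The 0.5x sub-model reuses the first half of the full (1.0x) weights."""
    def __init__(self, d_in=32, d_hidden=64, d_out=10):
        super().__init__()
        self.fc1 = torch.nn.Linear(d_in, d_hidden)
        self.fc2 = torch.nn.Linear(d_hidden, d_out)

    def forward(self, x, width_ratio=1.0):
        k = int(self.fc1.out_features * width_ratio)  # active hidden units
        h = F.relu(F.linear(x, self.fc1.weight[:k], self.fc1.bias[:k]))
        return F.linear(h, self.fc2.weight[:, :k], self.fc2.bias)

def superposition_train_step(model, opt, x, y, widths=(0.5, 1.0), mix=(0.5, 0.5)):
    """One local ST step: forward each width and superpose the losses, so a
    single backward pass updates the shared (nested) parameters.  The equal
    loss mixing is an assumed simplification."""
    opt.zero_grad()
    loss = sum(w * F.cross_entropy(model(x, r), y) for r, w in zip(widths, mix))
    loss.backward()
    opt.step()
    return float(loss)

# Toy usage with random data standing in for a device's local dataset.
model = SlimmableMLP()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
print(superposition_train_step(model, opt, x, y))
```

At aggregation time, SlimFL superposition-codes the two width segments so that, under poor channel conditions, the server can still decode the 0.5x part alone, and under good conditions both parts; that channel-coding step is outside the scope of this local-training sketch.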
Related papers
- Hyperdimensional Computing Empowered Federated Foundation Model over Wireless Networks for Metaverse [56.384390765357004]
We propose an integrated federated split learning and hyperdimensional computing framework for emerging foundation models.
This novel approach reduces communication costs, computation load, and privacy risks, making it suitable for resource-constrained edge devices in the Metaverse.
arXiv Detail & Related papers (2024-08-26T17:03:14Z)
- Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for hierarchical FL (HFL) in wireless networks to reduce the neural network scale.
We show that the proposed HFL with model pruning achieves learning accuracy similar to HFL without pruning while reducing communication cost by about 50 percent.
arXiv Detail & Related papers (2023-05-15T22:04:49Z)
- FLCC: Efficient Distributed Federated Learning on IoMT over CSMA/CA [0.0]
Federated Learning (FL) has emerged as a promising approach for privacy preservation.
This article investigates the performance of FL on an application that might be used to improve a remote healthcare system over ad hoc networks.
We present two metrics to evaluate the network performance: 1) the probability of successful transmission while minimizing interference, and 2) the performance of the distributed FL model in terms of accuracy and loss.
arXiv Detail & Related papers (2023-03-29T16:36:42Z)
- Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recently, FL has been interpreted within a Model-Agnostic Meta-Learning (MAML) framework, which brings FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
arXiv Detail & Related papers (2023-03-23T02:42:10Z)
- Digital Over-the-Air Federated Learning in Multi-Antenna Systems [30.137208705209627]
We study the performance optimization of federated learning (FL) over a realistic wireless communication system with digital modulation and over-the-air computation (AirComp).
We propose a modified federated averaging (FedAvg) algorithm that combines digital modulation with AirComp to mitigate wireless fading while ensuring communication efficiency.
An artificial neural network (ANN) is used to estimate the local FL models of all devices and adjust the beamforming matrices at the parameter server (PS) for future model transmission.
arXiv Detail & Related papers (2023-02-04T07:26:06Z)
- CFLIT: Coexisting Federated Learning and Information Transfer [18.30671838758503]
We study the coexistence of over-the-air FL and traditional information transfer (IT) in a mobile edge network.
We propose a coexisting federated learning and information transfer (CFLIT) communication framework, where the FL and IT devices share the wireless spectrum in an OFDM system.
arXiv Detail & Related papers (2022-07-26T13:17:28Z)
- SlimFL: Federated Learning with Superposition Coding over Slimmable Neural Networks [56.68149211499535]
Federated learning (FL) is a key enabler for efficient communication and computing, leveraging devices' distributed computing capabilities.
This paper proposes a novel learning framework that integrates FL and width-adjustable slimmable neural networks (SNNs).
We propose a communication- and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
arXiv Detail & Related papers (2022-03-26T15:06:13Z)
- Communication and Energy Efficient Slimmable Federated Learning via Superposition Coding and Successive Decoding [55.58665303852148]
Federated learning (FL) has great potential for exploiting private data by exchanging locally trained models instead of raw data.
We propose a novel energy- and communication-efficient FL framework, coined SlimFL.
We show that SlimFL can simultaneously train both $0.5$x and $1.0$x models with reasonable accuracy and convergence speed.
arXiv Detail & Related papers (2021-12-05T13:35:26Z)
- Communication-Efficient Federated Learning with Binary Neural Networks [15.614120327271557]
Federated learning (FL) is a privacy-preserving machine learning setting.
FL involves a frequent exchange of parameters between all the clients and the server that coordinates the training.
In this paper, we consider training binary neural networks (BNNs) in the FL setting instead of the typical real-valued neural networks.
arXiv Detail & Related papers (2021-10-05T15:59:49Z)
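As a rough illustration of why the binarized exchange in the entry above saves bandwidth, the sketch below has each client upload only the signs of its weights plus one per-tensor scale, which the server reconstructs and averages. The sign-based binarization, the mean-absolute-value scale, and the plain averaging are assumptions made for illustration; the referenced paper's actual BNN training and aggregation scheme may differ.

```python
# Illustrative sketch of exchanging binarized model parameters in FL to cut
# uplink cost.  The binarization rule, scaling, and plain server-side
# averaging are assumptions, not the referenced paper's exact method.
import numpy as np

def binarize_for_upload(weights: np.ndarray):
    """Client side: send only signs plus one per-tensor scale."""
    scale = np.mean(np.abs(weights))          # per-tensor magnitude
    signs = np.sign(weights).astype(np.int8)  # {-1, 0, +1}
    return signs, scale

def aggregate(uploads):
    """Server side: reconstruct each client's weights and average them."""
    recon = [signs.astype(np.float32) * scale for signs, scale in uploads]
    return np.mean(recon, axis=0)

# Toy usage: three clients upload binarized versions of their local weights.
rng = np.random.default_rng(0)
local_models = [rng.normal(size=(4, 4)) for _ in range(3)]
global_update = aggregate([binarize_for_upload(w) for w in local_models])
print(global_update.shape)  # (4, 4)
```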