Communication and Energy Efficient Slimmable Federated Learning via
Superposition Coding and Successive Decoding
- URL: http://arxiv.org/abs/2112.03267v1
- Date: Sun, 5 Dec 2021 13:35:26 GMT
- Title: Communication and Energy Efficient Slimmable Federated Learning via
Superposition Coding and Successive Decoding
- Authors: Hankyul Baek, Won Joon Yun, Soyi Jung, Jihong Park, Mingyue Ji,
Joongheon Kim, Mehdi Bennis
- Abstract summary: Federated learning (FL) has great potential for exploiting private data by exchanging locally trained models instead of their raw data.
We propose a novel energy and communication efficient FL framework, coined SlimFL.
We show that SlimFL can simultaneously train both 0.5x and 1.0x models with reasonable accuracy and convergence speed.
- Score: 55.58665303852148
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Mobile devices are indispensable sources of big data. Federated learning (FL)
has great potential for exploiting these private data by exchanging locally
trained models instead of their raw data. However, mobile devices are often
energy-limited and wirelessly connected, and FL cannot cope flexibly with their
heterogeneous and time-varying energy capacity and communication throughput,
limiting its adoption. Motivated by these issues, we propose a novel energy and
communication efficient FL framework, coined SlimFL. To resolve the
heterogeneous energy capacity problem, each device in SlimFL runs a
width-adjustable slimmable neural network (SNN). To address the heterogeneous
communication throughput problem, each full-width (1.0x) SNN model and its
half-width (0.5x) model are superposition-coded before transmission, and
successively decoded after reception as the 0.5x or 1.0x model depending on
the channel quality. Simulation results show that SlimFL can simultaneously
train both 0.5x and 1.0x models with reasonable accuracy and convergence
speed, compared to its vanilla FL counterpart that separately trains the two
models using 2x more communication resources. Surprisingly, SlimFL achieves
even higher accuracy with lower energy footprints than vanilla FL for poor
channels and non-IID data distributions, under which vanilla FL converges
slowly.
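To make the mechanism concrete, the sketch below illustrates the two ingredients the abstract describes: a slimmable layer whose 0.5x sub-model is a slice of the 1.0x weights, and superposition coding with successive decoding, where a receiver on a poor channel recovers only the 0.5x model while a receiver on a good channel recovers the full 1.0x model. This is only a minimal illustration under assumed values; the power split, SINR threshold, and layer sizes are not taken from the paper, and the paper's exact encoding and aggregation rules are not reproduced.

```python
import numpy as np

def sic_decoding_outcome(snr_linear, power_split=0.9, sinr_threshold=3.0):
    """Which messages a receiver recovers under superposition coding (SC)
    with successive decoding / interference cancellation.

    The 0.5x (base) message gets a fraction `power_split` of the transmit
    power, the 1.0x residual (enhancement) message gets the rest. The base
    is decoded first, treating the enhancement as interference; if that
    succeeds, it is cancelled and the enhancement is decoded next.
    `power_split` and `sinr_threshold` are illustrative assumptions."""
    sinr_base = (power_split * snr_linear) / ((1 - power_split) * snr_linear + 1.0)
    if sinr_base < sinr_threshold:
        return "nothing decoded"
    sinr_enh = (1 - power_split) * snr_linear  # interference-free after cancellation
    if sinr_enh < sinr_threshold:
        return "0.5x model only"
    return "full 1.0x model"

# Slimmable weights: the 0.5x sub-model is the leading slice of the 1.0x weights,
# so decoding only the base message still yields a usable (narrower) model.
rng = np.random.default_rng(0)
W_full = rng.standard_normal((64, 64))   # one 1.0x layer's weights
W_half = W_full[:32, :32]                # the 0.5x sub-model shares these weights
print("1.0x layer:", W_full.shape, " 0.5x slice:", W_half.shape)

for snr_db in (0, 10, 20):
    print(f"{snr_db:>2} dB ->", sic_decoding_outcome(10 ** (snr_db / 10)))
```

With these assumed numbers the three channel qualities land in the three decoding regimes (nothing, 0.5x only, full 1.0x), which is the behavior SlimFL exploits so that devices with weak channels can still contribute and benefit through the 0.5x model.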
Related papers
- Joint Energy and Latency Optimization in Federated Learning over Cell-Free Massive MIMO Networks [36.6868658064971]
Federated learning (FL) is a distributed learning paradigm wherein users exchange FL models with a server instead of raw datasets.
Cell-free massive multiple-input multiple-output (CFmMIMO) is a promising architecture for implementing FL because it serves many users on the same time/frequency resources.
We propose an uplink power allocation scheme in FL over CFmMIMO by considering the effect of each user's power on the energy and latency of other users.
arXiv Detail & Related papers (2024-04-28T19:24:58Z) - Have Your Cake and Eat It Too: Toward Efficient and Accurate Split Federated Learning [25.47111107054497]
Split Federated Learning (SFL) is promising in AIoT systems.
However, SFL suffers from low inference accuracy and low efficiency.
This paper presents a novel SFL approach, named Sliding Split Federated Learning (S2FL).
arXiv Detail & Related papers (2023-11-22T05:09:50Z) - Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for hierarchical FL (HFL) in wireless networks to reduce the neural network scale.
We show that the proposed HFL with model pruning achieves learning accuracy similar to that of HFL without pruning while reducing the communication cost by about 50 percent.
arXiv Detail & Related papers (2023-05-15T22:04:49Z) - Improving the Model Consistency of Decentralized Federated Learning [68.2795379609854]
Decentralized Federated Learning (DFL) discards the central server, and each client communicates only with its neighbors in a decentralized communication network.
Existing DFL suffers from inconsistency among local clients, which results in inferior performance compared with centralized FL.
We propose DFedSAM-MGS, whose convergence bound depends on the spectral gap $1-\lambda$ of the gossip matrix and the number of gossip steps $Q$.
arXiv Detail & Related papers (2023-02-08T14:37:34Z) - SlimFL: Federated Learning with Superposition Coding over Slimmable
Neural Networks [56.68149211499535]
Federated learning (FL) is a key enabler for efficient communication and computing leveraging devices' distributed computing capabilities.
This paper proposes a novel learning framework by integrating FL and width-adjustable slimmable neural networks (SNNs).
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
arXiv Detail & Related papers (2022-03-26T15:06:13Z) - FedLGA: Towards System-Heterogeneity of Federated Learning via Local
Gradient Approximation [21.63719641718363]
We formalize the system-heterogeneous FL problem and propose a new algorithm, called FedLGA, which addresses this problem by bridging the divergence of local model updates via gradient approximation.
The results of comprehensive experiments on multiple datasets show that FedLGA outperforms current FL benchmarks under system heterogeneity.
arXiv Detail & Related papers (2021-12-22T16:05:09Z) - Joint Superposition Coding and Training for Federated Learning over
Multi-Width Neural Networks [52.93232352968347]
This paper aims to integrate two synergetic technologies, federated learning (FL) and width-adjustable slimmable neural network (SNN).
FL preserves data privacy by exchanging the locally trained models of mobile devices. Training SNNs is, however, non-trivial, particularly under wireless connections with time-varying channel conditions.
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
arXiv Detail & Related papers (2021-12-05T11:17:17Z) - Over-the-Air Federated Learning from Heterogeneous Data [107.05618009955094]
Federated learning (FL) is a framework for distributed learning of centralized models.
We develop a Convergent OTA FL (COTAF) algorithm which enhances the common local stochastic gradient descent (SGD) FL algorithm.
We numerically show that the precoding induced by COTAF notably improves the convergence rate and the accuracy of models trained via OTA FL.
arXiv Detail & Related papers (2020-09-27T08:28:25Z)
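The COTAF entry above mentions precoding for over-the-air (OTA) model aggregation. As a rough illustration of that idea, and not COTAF's exact precoder, the sketch below scales every client's update by a common power-control factor, lets the multiple-access channel sum the simultaneous transmissions with noise, and has the server undo the scaling to recover a noisy average; the power budget, noise level, and use of realized rather than expected update norms are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
num_clients, dim = 10, 1000
power_budget, noise_std = 1.0, 0.1   # illustrative values, not from the paper

# Local model updates, e.g. the result of a few local SGD steps per client.
updates = [0.05 * rng.standard_normal(dim) for _ in range(num_clients)]

# Precoding: a common scaling so the largest transmission meets the power budget
# (a stand-in for COTAF's time-varying precoder, which is derived so that noisy
# over-the-air aggregation mimics noiseless federated averaging).
alpha = power_budget / max(np.linalg.norm(u) ** 2 for u in updates)
tx = [np.sqrt(alpha) * u for u in updates]

# The multiple-access channel adds all simultaneous transmissions plus noise.
rx = np.sum(tx, axis=0) + noise_std * rng.standard_normal(dim)

# The server undoes the precoding and averages in one shot.
avg_est = rx / (num_clients * np.sqrt(alpha))
true_avg = np.mean(updates, axis=0)
print("relative aggregation error:",
      np.linalg.norm(avg_est - true_avg) / np.linalg.norm(true_avg))
```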
This list is automatically generated from the titles and abstracts of the papers on this site.