CFLIT: Coexisting Federated Learning and Information Transfer
- URL: http://arxiv.org/abs/2207.12884v3
- Date: Wed, 5 Apr 2023 07:52:12 GMT
- Authors: Zehong Lin, Hang Liu, Ying-Jun Angela Zhang
- Abstract summary: We study the coexistence of over-the-air FL and traditional information transfer (IT) in a mobile edge network.
We propose a coexisting federated learning and information transfer (CFLIT) communication framework, where the FL and IT devices share the wireless spectrum in an OFDM system.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Future wireless networks are expected to support diverse mobile services,
including artificial intelligence (AI) services and ubiquitous data
transmissions. Federated learning (FL), as a revolutionary learning approach,
enables collaborative AI model training across distributed mobile edge devices.
By exploiting the superposition property of multiple-access channels,
over-the-air computation allows concurrent model uploading from massive devices
over the same radio resources, and thus significantly reduces the communication
cost of FL. In this paper, we study the coexistence of over-the-air FL and
traditional information transfer (IT) in a mobile edge network. We propose a
coexisting federated learning and information transfer (CFLIT) communication
framework, where the FL and IT devices share the wireless spectrum in an OFDM
system. Under this framework, we aim to maximize the IT data rate and guarantee
a given FL convergence performance by optimizing the long-term radio resource
allocation. A key challenge that limits the spectrum efficiency of the
coexisting system lies in the large overhead incurred by frequent communication
between the server and edge devices for FL model aggregation. To address the
challenge, we rigorously analyze the impact of the computation-to-communication
ratio on the convergence of over-the-air FL in wireless fading channels. The
analysis reveals the existence of an optimal computation-to-communication ratio
that minimizes the amount of radio resources needed for over-the-air FL to
converge to a given error tolerance. Based on the analysis, we propose a
low-complexity online algorithm to jointly optimize the radio resource
allocation for both the FL devices and IT devices. Extensive numerical
simulations verify the superior performance of the proposed design for the
coexistence of FL and IT devices in wireless cellular systems.
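The superposition property exploited by over-the-air computation can be illustrated with a toy simulation. The sketch below is not the paper's algorithm; it is a minimal, hypothetical example assuming real-valued fading, perfect channel-inversion precoding at each device, and additive Gaussian noise at the server, to show why concurrent uploads over the same resources yield (approximately) the averaged model update.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 20, 8                       # number of edge devices, model dimension

updates = rng.normal(size=(K, d))  # local model updates (toy values)
h = rng.uniform(0.5, 1.5, size=K)  # per-device channel gains (toy real-valued fading)

# Channel-inversion precoding: device k transmits updates[k] / h[k], so the
# channel gain cancels and the multiple-access channel adds the signals.
noise = rng.normal(scale=0.01, size=d)
received = (h[:, None] * (updates / h[:, None])).sum(axis=0) + noise

ota_average = received / K           # over-the-air estimate of the global update
true_average = updates.mean(axis=0)  # what orthogonal per-device uploads would compute
```

Here all K devices occupy the same radio resources simultaneously, so the uplink cost is independent of K; the residual error between `ota_average` and `true_average` is set by the receiver noise, which is the effect the convergence analysis in the paper has to account for.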
Related papers
- Analysis and Optimization of Wireless Federated Learning with Data Heterogeneity [72.85248553787538]
This paper focuses on performance analysis and optimization for wireless FL, considering data heterogeneity, combined with wireless resource allocation.
We formulate the loss function minimization problem, under constraints on long-term energy consumption and latency, and jointly optimize client scheduling, resource allocation, and the number of local training epochs (CRE).
Experiments on real-world datasets demonstrate that the proposed algorithm outperforms other benchmarks in terms of the learning accuracy and energy consumption.
arXiv Detail & Related papers (2023-08-04T04:18:01Z)
- Vertical Federated Learning over Cloud-RAN: Convergence Analysis and System Optimization [82.12796238714589]
We propose a novel cloud radio access network (Cloud-RAN) based vertical FL system to enable fast and accurate model aggregation.
We characterize the convergence behavior of the vertical FL algorithm considering both uplink and downlink transmissions.
We establish a system optimization framework by joint transceiver and fronthaul quantization design, for which successive convex approximation and alternate convex search based system optimization algorithms are developed.
arXiv Detail & Related papers (2023-05-04T09:26:03Z)
- Energy and Spectrum Efficient Federated Learning via High-Precision Over-the-Air Computation [26.499025986273832]
Federated learning (FL) enables mobile devices to collaboratively learn a shared prediction model while keeping data locally.
There are two major research challenges to practically deploy FL over mobile devices.
We propose a novel multi-bit over-the-air computation (M-AirComp) approach for spectrum-efficient aggregation of local model updates in FL.
arXiv Detail & Related papers (2022-08-15T14:47:21Z)
- Resource Allocation for Compression-aided Federated Learning with High Distortion Rate [3.7530276852356645]
We formulate an optimization-aided FL problem between the distortion rate, number of participating IoT devices, and convergence rate.
By actively controlling participating IoT devices, we can avoid the training divergence of compression-aided FL while maintaining the communication efficiency.
arXiv Detail & Related papers (2022-06-02T05:00:37Z)
- SlimFL: Federated Learning with Superposition Coding over Slimmable Neural Networks [56.68149211499535]
Federated learning (FL) is a key enabler for efficient communication and computing leveraging devices' distributed computing capabilities.
This paper proposes a novel learning framework that integrates FL and width-adjustable slimmable neural networks (SNNs).
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
arXiv Detail & Related papers (2022-03-26T15:06:13Z)
- Joint Superposition Coding and Training for Federated Learning over Multi-Width Neural Networks [52.93232352968347]
This paper aims to integrate two synergetic technologies: federated learning (FL) and width-adjustable slimmable neural networks (SNNs).
FL preserves data privacy by exchanging the locally trained models of mobile devices. Integrating SNNs into FL is, however, non-trivial, particularly under wireless connections with time-varying channel conditions.
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
arXiv Detail & Related papers (2021-12-05T11:17:17Z)
- Over-the-Air Federated Learning with Retransmissions (Extended Version) [21.37147806100865]
We study the impact of estimation errors on the convergence of Federated Learning (FL) over resource-constrained wireless networks.
We propose retransmissions as a method to improve FL convergence over resource-constrained wireless networks.
arXiv Detail & Related papers (2021-11-19T15:17:15Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL) as a paradigm of collaborative learning techniques has obtained increasing research attention.
It is of interest to investigate fast responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- To Talk or to Work: Flexible Communication Compression for Energy Efficient Federated Learning over Heterogeneous Mobile Edge Devices [78.38046945665538]
Federated learning (FL) over massive mobile edge devices opens new horizons for numerous intelligent mobile applications.
FL imposes huge communication and computation burdens on participating devices due to periodical global synchronization and continuous local training.
We develop a convergence-guaranteed FL algorithm enabling flexible communication compression.
arXiv Detail & Related papers (2020-12-22T02:54:18Z)
- Reconfigurable Intelligent Surface Enabled Federated Learning: A Unified Communication-Learning Design Approach [30.1988598440727]
We develop a unified communication-learning optimization problem to jointly optimize device selection, over-the-air transceiver design, and RIS configuration.
Numerical experiments show that the proposed design achieves substantial learning accuracy improvement compared with the state-of-the-art approaches.
arXiv Detail & Related papers (2020-11-20T08:54:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.