Quantized Federated Learning under Transmission Delay and Outage Constraints
- URL: http://arxiv.org/abs/2106.09397v1
- Date: Thu, 17 Jun 2021 11:29:12 GMT
- Title: Quantized Federated Learning under Transmission Delay and Outage Constraints
- Authors: Yanmeng Wang, Yanqing Xu, Qingjiang Shi, Tsung-Hui Chang
- Abstract summary: Federated learning is a viable distributed learning paradigm which trains a machine learning model collaboratively with massive mobile devices in the wireless edge.
In practical systems with limited radio resources, transmission of a large number of model parameters inevitably suffers from quantization errors (QE) and transmission outage (TO).
We propose a robust FL scheme, named FedTOE, which performs joint allocation of wireless resources and quantization bits across the clients to minimize the QE while making the clients have the same TO probability.
- Score: 30.892724364965005
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) has been recognized as a viable distributed learning
paradigm which trains a machine learning model collaboratively with massive
mobile devices in the wireless edge while protecting user privacy. Although
various communication schemes have been proposed to expedite the FL process,
most of them have assumed ideal wireless channels which provide reliable and
lossless communication links between the server and mobile clients.
Unfortunately, in practical systems with limited radio resources, such as
constraints on the training latency and on the transmission power and
bandwidth, transmission of a large number of model parameters inevitably
suffers from quantization errors (QE) and transmission outage (TO). In this
paper, we consider such non-ideal wireless channels, and carry out the first
analysis showing that the FL convergence can be severely jeopardized by TO and
QE, but intriguingly can be alleviated if the clients have uniform outage
probabilities. These insightful results motivate us to propose a robust FL
scheme, named FedTOE, which performs joint allocation of wireless resources and
quantization bits across the clients to minimize the QE while making the
clients have the same TO probability. Extensive experimental results are
presented to show the superior performance of FedTOE for a deep learning-based
classification task with transmission latency constraints.
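To make the scheme concrete, below is a minimal Python sketch of the allocation logic the abstract describes: fix a common target outage probability for every client, derive the rate each client can then sustain (here under an assumed Rayleigh-fading outage model), and let the latency budget determine how many quantization bits per parameter each client can afford. The function name, the channel model, and all numbers are illustrative assumptions, not the paper's exact optimization program.

```python
import numpy as np

def fedtoe_style_bit_allocation(snr_db, bandwidth_hz, latency_s, dim, target_outage):
    """Illustrative sketch of FedTOE-style allocation (hypothetical helper).

    Assumes Rayleigh fading, where a client transmitting at rate r
    (bits/s/Hz) with average SNR `snr` sees outage probability
    1 - exp(-(2**r - 1) / snr).  Pinning every client to the same target
    outage yields a per-client rate, and the latency budget converts that
    rate into a per-parameter quantization bit budget.
    """
    snr = 10.0 ** (np.asarray(snr_db, dtype=float) / 10.0)
    # Solve 1 - exp(-(2**r - 1)/snr) = target_outage for the rate r.
    rate = np.log2(1.0 - snr * np.log(1.0 - target_outage))
    # Total bits deliverable within the latency budget, spread across the
    # `dim` model parameters; fewer bits per parameter means larger QE.
    bits_per_param = np.floor(rate * bandwidth_hz * latency_s / dim).astype(int)
    return np.maximum(bits_per_param, 1)  # keep at least a 1-bit quantizer

# Three clients with different channel qualities but one common outage target.
print(fedtoe_style_bit_allocation(snr_db=[5, 10, 15], bandwidth_hz=1e6,
                                  latency_s=0.05, dim=10_000, target_outage=0.1))
# -> [ 2  5 10]: better channels afford finer quantization at equal outage.
```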
Related papers
- Towards Resource-Efficient Federated Learning in Industrial IoT for Multivariate Time Series Analysis [50.18156030818883]
Anomaly and missing data constitute a thorny problem in industrial applications.
Deep-learning-enabled anomaly detection has emerged as a critical direction.
The data collected on edge devices contain privacy-sensitive user information.
arXiv Detail & Related papers (2024-11-06T15:38:31Z)
- Digital Twin-Assisted Federated Learning with Blockchain in Multi-tier Computing Systems [67.14406100332671]
In Industry 4.0 systems, resource-constrained edge devices engage in frequent data interactions.
This paper proposes a digital twin (DT)-assisted federated learning (FL) scheme.
The efficacy of our proposed cooperative interference-based FL process has been verified through numerical analysis.
arXiv Detail & Related papers (2024-11-04T17:48:02Z)
- A SER-based Device Selection Mechanism in Multi-bits Quantization Federated Learning [6.922030110539386]
This paper analyzes the influence of wireless communication on federated learning (FL) through the symbol error rate (SER).
In an FL system, non-orthogonal multiple access (NOMA) can be used as the underlying communication framework to reduce the congestion and interference caused by multiple users.
The gradient parameters are quantized into multiple bits to retain as much gradient information as possible and to improve tolerance to transmission errors; a minimal sketch of such a multi-bit quantizer follows this entry.
arXiv Detail & Related papers (2024-04-20T06:27:01Z)
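As a companion to the summary above, here is a small, self-contained sketch of an unbiased multi-bit uniform quantizer of the general kind this entry describes; the helper name and the printed error comparison are illustrative, not the paper's scheme.

```python
import numpy as np

def stochastic_quantize(grad, bits, rng):
    """Unbiased uniform quantizer: more bits give a finer grid, so more
    gradient information survives transmission."""
    levels = 2 ** bits - 1
    lo, hi = float(grad.min()), float(grad.max())
    scale = (hi - lo) / levels if hi > lo else 1.0
    normalized = (grad - lo) / scale           # values in [0, levels]
    floor = np.floor(normalized)
    prob_up = normalized - floor               # stochastic rounding keeps E[q] = grad
    q = floor + (rng.random(grad.shape) < prob_up)
    return lo + q * scale

rng = np.random.default_rng(0)
g = rng.standard_normal(1000)
for b in (1, 2, 4, 8):
    err = np.abs(stochastic_quantize(g, b, rng) - g).mean()
    print(f"{b} bits -> mean abs quantization error {err:.4f}")
```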
- Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for hierarchical FL (HFL) in wireless networks to reduce the neural network scale.
We show that the proposed HFL with model pruning achieves learning accuracy similar to that of HFL without pruning, while reducing communication cost by about 50 percent; a toy magnitude-pruning sketch follows this entry.
arXiv Detail & Related papers (2023-05-15T22:04:49Z)
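The roughly 50 percent saving reported above is the kind of effect obtained by transmitting only the largest-magnitude parameters; the sketch below is a generic magnitude-pruning helper written under that assumption, not the paper's pruning rule.

```python
import numpy as np

def magnitude_prune(weights, keep_ratio=0.5):
    """Zero all but the largest-magnitude weights.  Sending only the
    surviving (index, value) pairs shrinks the payload roughly in
    proportion to keep_ratio."""
    flat = np.abs(weights).ravel()
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.partition(flat, flat.size - k)[flat.size - k]
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))
pruned, mask = magnitude_prune(w, keep_ratio=0.5)
print(f"kept {mask.sum()} of {mask.size} weights")  # about half survive
```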
- FLSTRA: Federated Learning in Stratosphere [22.313423693397556]
A high-altitude platform station enables a number of terrestrial clients to collaboratively learn a global model without sharing their training data.
We develop a joint client selection and resource allocation algorithm for uplink and downlink to minimize the FL delay.
We also propose a communication- and resource-aware algorithm to achieve the target FL accuracy, and derive an upper bound for its convergence; a toy delay-aware client-selection sketch follows this entry.
arXiv Detail & Related papers (2023-02-01T00:52:55Z)
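A toy version of the delay-aware client selection referenced above: estimate each client's round delay from its uplink and downlink rates and keep only the clients that fit the delay budget. The additive rate-based delay model and all names are assumptions for illustration, not the FLSTRA algorithm itself.

```python
import numpy as np

def select_clients(uplink_bps, downlink_bps, model_bits, delay_budget_s):
    """Keep clients whose round-trip model exchange fits the budget.
    Round delay here is simply download time plus upload time."""
    up = np.asarray(uplink_bps, dtype=float)
    down = np.asarray(downlink_bps, dtype=float)
    delay = model_bits / down + model_bits / up
    return np.flatnonzero(delay <= delay_budget_s), delay

idx, delay = select_clients(uplink_bps=[1e6, 5e6, 2e5],
                            downlink_bps=[1e7, 1e7, 1e6],
                            model_bits=1e6, delay_budget_s=2.0)
print(idx, np.round(delay, 2))  # slow client 2 is excluded
```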
- CFLIT: Coexisting Federated Learning and Information Transfer [18.30671838758503]
We study the coexistence of over-the-air FL and traditional information transfer (IT) in a mobile edge network.
We propose a coexisting federated learning and information transfer (CFLIT) communication framework, where the FL and IT devices share the wireless spectrum in an OFDM system.
arXiv Detail & Related papers (2022-07-26T13:17:28Z)
- Over-the-Air Federated Learning with Retransmissions (Extended Version) [21.37147806100865]
We study the impact of estimation errors on the convergence of Federated Learning (FL) over resource-constrained wireless networks.
We propose retransmissions as a method to improve FL convergence over resource-constrained wireless networks; a small retransmission-averaging sketch follows this entry.
arXiv Detail & Related papers (2021-11-19T15:17:15Z)
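The retransmission idea above fits in a few lines: with over-the-air computation the server receives the noisy analog superposition of all simultaneous updates, and averaging R retransmissions cuts the receiver-noise variance by a factor of R. The noise level and array shapes are made-up illustration values.

```python
import numpy as np

def ota_aggregate(updates, noise_std, retransmissions, rng):
    """Over-the-air aggregation: simultaneous analog transmissions
    superpose, so each round the server observes sum(updates) + noise.
    Averaging R independent rounds divides the noise variance by R."""
    true_sum = np.sum(updates, axis=0)
    rounds = [true_sum + rng.normal(0.0, noise_std, true_sum.shape)
              for _ in range(retransmissions)]
    return np.mean(rounds, axis=0) / len(updates)

rng = np.random.default_rng(1)
updates = rng.standard_normal((10, 5))          # 10 clients, 5 parameters
for R in (1, 4, 16):
    est = ota_aggregate(updates, 1.0, R, rng)
    mse = float(np.mean((est - updates.mean(axis=0)) ** 2))
    print(f"R={R:2d} retransmissions -> aggregation MSE {mse:.5f}")
```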
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), as a collaborative learning paradigm, has attracted increasing research attention.
It is of interest to investigate fast-responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Low-Latency Federated Learning over Wireless Channels with Differential Privacy [142.5983499872664]
In federated learning (FL), model training is distributed over clients and local models are aggregated by a central server.
In this paper, we aim to minimize FL training delay over wireless channels, constrained by overall training performance as well as each client's differential privacy (DP) requirement; a minimal Gaussian-mechanism sketch follows this entry.
arXiv Detail & Related papers (2021-06-20T13:51:18Z)
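For context on the DP constraint mentioned above, here is a minimal sketch of the standard Gaussian mechanism applied to a client update: bound the update's L2 norm, then add calibrated Gaussian noise. Parameter names are illustrative; the paper's delay-DP trade-off is an optimization built on top of a primitive like this, not this code.

```python
import numpy as np

def dp_sanitize(update, clip_norm, noise_multiplier, rng):
    """Gaussian mechanism for client-level DP: clip each update's
    L2 norm, then add noise scaled to that clipping bound."""
    update = np.asarray(update, dtype=float)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, update.shape)
    return clipped + noise

rng = np.random.default_rng(0)
u = rng.standard_normal(5) * 10.0                # an unclipped local update
print(dp_sanitize(u, clip_norm=1.0, noise_multiplier=0.5, rng=rng))
```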
- Harnessing Wireless Channels for Scalable and Privacy-Preserving Federated Learning [56.94644428312295]
Wireless connectivity is instrumental in enabling federated learning (FL).
Channel randomness perturbs each worker's model update, while multiple workers' updates incur significant interference under limited bandwidth.
In the proposed analog federated ADMM (A-FADMM) scheme, all workers upload their model updates to the parameter server over a single channel via analog transmissions.
This not only saves communication bandwidth, but also hides each worker's exact model update trajectory from any eavesdropper.
arXiv Detail & Related papers (2020-07-03T16:31:15Z)