How Robust is Federated Learning to Communication Error? A Comparison
Study Between Uplink and Downlink Channels
- URL: http://arxiv.org/abs/2310.16652v2
- Date: Fri, 12 Jan 2024 19:30:15 GMT
- Title: How Robust is Federated Learning to Communication Error? A Comparison
Study Between Uplink and Downlink Channels
- Authors: Linping Qu, Shenghui Song, Chi-Ying Tsui, and Yuyi Mao
- Abstract summary: This paper investigates the robustness of federated learning (FL) to uplink and downlink communication errors.
It is shown that uplink communication in FL can tolerate a higher bit error rate (BER) than downlink communication.
- Score: 13.885735785986164
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Because of its privacy-preserving capability, federated learning (FL) has
attracted significant attention from both academia and industry. However, when
implemented over wireless networks, it is not clear how much communication
error can be tolerated by FL. This paper investigates the robustness of FL to
uplink and downlink communication errors. Our theoretical analysis reveals that
the robustness depends on two critical parameters, namely the number of clients
and the numerical range of the model parameters. It is also shown that uplink
communication in FL can tolerate a higher bit error rate (BER) than downlink
communication, and this difference is quantified by a proposed formula. The
findings and theoretical analyses are further validated by extensive
experiments.
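To make the uplink/downlink asymmetry concrete, here is a minimal toy simulation (an illustrative sketch, not the paper's experimental setup): uplink bit errors corrupt each client's update independently and partially cancel during server-side averaging, while a downlink bit error corrupts the single broadcast model that every client inherits. The flip_bits helper, the 16-bit fixed-point quantizer, and all constants below are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def flip_bits(params, ber, num_bits=16, scale=4.0):
    # Quantize to num_bits fixed-point codewords on [-scale, scale],
    # flip each bit independently with probability ber, then dequantize.
    # A toy channel model; the paper's BER model may differ.
    levels = 2 ** num_bits
    q = np.clip((params / scale + 1) / 2, 0.0, 1.0)    # map to [0, 1]
    ints = (q * (levels - 1)).astype(np.uint32)        # integer codewords
    for b in range(num_bits):
        flips = rng.random(ints.shape) < ber           # which words flip bit b
        ints ^= flips.astype(np.uint32) << b
    return ((ints / (levels - 1)) * 2 - 1) * scale

num_clients, dim, ber = 50, 1000, 1e-3
global_model = rng.normal(size=dim)
client_updates = [global_model + 0.1 * rng.normal(size=dim)
                  for _ in range(num_clients)]
clean_avg = np.mean(client_updates, axis=0)

# Uplink: every client's update passes through its own noisy channel,
# and the independent errors partially average out at the server.
uplink_avg = np.mean([flip_bits(u, ber) for u in client_updates], axis=0)

# Downlink: the broadcast global model is corrupted once, so the same
# error reaches all clients with no averaging to dampen it.
downlink_model = flip_bits(clean_avg, ber)

print("uplink MSE:  ", np.mean((uplink_avg - clean_avg) ** 2))
print("downlink MSE:", np.mean((downlink_model - clean_avg) ** 2))
```

In this toy model the downlink error is roughly num_clients times larger than the uplink error, consistent with the abstract's claim that robustness grows with the number of clients; the dependence on the numerical range of the parameters shows up through the scale of the quantizer.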
Related papers
- R-SFLLM: Jamming Resilient Framework for Split Federated Learning with Large Language Models [83.77114091471822]
Split federated learning (SFL) is a compute-efficient paradigm in distributed machine learning (ML).
A challenge in SFL, particularly when deployed over wireless channels, is the susceptibility of transmitted model parameters to adversarial jamming.
This is particularly pronounced for word embedding parameters in large language models (LLMs), which are crucial for language understanding.
A physical layer framework is developed for resilient SFL with LLMs (R-SFLLM) over wireless networks.
arXiv Detail & Related papers (2024-07-16T12:21:29Z)
- SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low Computational Overhead [75.87007729801304]
SpaFL, a communication-efficient FL framework, is proposed to optimize sparse model structures with low computational overhead.
Experiments show that SpaFL improves accuracy while requiring far fewer communication and computing resources than sparse baselines.
arXiv Detail & Related papers (2024-06-01T13:10:35Z)
- Digital versus Analog Transmissions for Federated Learning over Wireless Networks [91.20926827568053]
We compare two effective communication schemes for wireless federated learning (FL) over resource-constrained networks.
We first examine both digital and analog transmission methods, together with a unified and fair comparison scheme under practical constraints.
A universal convergence analysis under various imperfections is established for FL performance evaluation in wireless networks.
arXiv Detail & Related papers (2024-02-15T01:50:46Z)
- Improved Convergence Analysis and SNR Control Strategies for Federated Learning in the Presence of Noise [10.685862129925729]
We propose an improved convergence analysis technique that characterizes the distributed learning paradigm with imperfect/noisy uplink and downlink communications.
Such imperfect communication scenarios arise in the practical deployment of FL in emerging communication systems and protocols.
arXiv Detail & Related papers (2023-07-14T15:35:57Z)
- FedDec: Peer-to-peer Aided Federated Learning [15.952956981784219]
Federated learning (FL) enables training machine learning models on the data of multiple agents without compromising privacy.
FL is known to be vulnerable to data heterogeneity, partial device participation, and infrequent communication with the server.
We present FedDec, an algorithm that interleaves peer-to-peer communication and parameter averaging between the local gradient updates of FL (a schematic sketch of this update pattern appears after this list).
arXiv Detail & Related papers (2023-06-11T16:30:57Z)
- Random Orthogonalization for Federated Learning in Massive MIMO Systems [85.71432283670114]
We propose a novel communication design for federated learning (FL) in a massive multiple-input and multiple-output (MIMO) wireless system.
The key novelty of random orthogonalization comes from the tight coupling of FL with two unique characteristics of massive MIMO: channel hardening and favorable propagation.
We extend this principle to the downlink communication phase and develop a simple but highly effective model broadcast method for FL.
arXiv Detail & Related papers (2022-10-18T14:17:10Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), as a paradigm of collaborative learning, has attracted increasing research attention.
It is of interest to investigate fast-responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Fast Federated Learning by Balancing Communication Trade-Offs [9.89867121050673]
Federated Learning (FL) has recently received a lot of attention for large-scale privacy-preserving machine learning.
High communication overhead due to frequent gradient transmissions slows down FL.
We propose an enhanced FL scheme, Fast FL (FFL), that jointly and dynamically adjusts two variables to minimize the learning error.
arXiv Detail & Related papers (2021-05-23T21:55:14Z)
- Federated Learning over Noisy Channels: Convergence Analysis and Design Examples [17.89437720094451]
This work studies federated learning (FL) when both uplink and downlink communications are subject to errors.
How much communication noise can FL handle, and what is its impact on the learning performance?
This work is devoted to answering these practically important questions by explicitly incorporating both uplink and downlink noisy channels in the FL pipeline.
arXiv Detail & Related papers (2021-01-06T18:57:39Z)
- Delay Minimization for Federated Learning Over Wireless Communication Networks [172.42768672943365]
The problem of delay minimization for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution (a generic bisection sketch appears after this list).
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
arXiv Detail & Related papers (2020-07-05T19:00:07Z)
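As noted in the FedDec entry above, the following is a schematic reconstruction of an update pattern that interleaves peer-to-peer neighbor averaging with local gradient steps. It is an illustrative sketch under assumed notation, not the authors' exact algorithm, and FL's periodic server averaging is omitted for brevity.

```python
import numpy as np

def feddec_style_round(models, grads_fn, mixing, lr=0.1, local_steps=5):
    # models:   (num_clients, dim) array of current client models
    # grads_fn: grads_fn(models) -> (num_clients, dim) local gradients
    # mixing:   doubly stochastic matrix whose sparsity pattern is the
    #           peer-to-peer communication graph
    for _ in range(local_steps):
        models = models - lr * grads_fn(models)  # local gradient update
        models = mixing @ models                 # peer-to-peer averaging
    return models

# Toy usage: 4 clients on a ring, quadratic losses with distinct optima.
num_clients, dim = 4, 3
targets = np.arange(num_clients * dim, dtype=float).reshape(num_clients, dim)
grads_fn = lambda m: m - targets                 # gradient of 0.5*||m - t||^2
mixing = 0.5 * np.eye(num_clients)
for i in range(num_clients):                     # ring topology weights
    mixing[i, (i + 1) % num_clients] += 0.25
    mixing[i, (i - 1) % num_clients] += 0.25

models = np.zeros((num_clients, dim))
for _ in range(100):
    models = feddec_style_round(models, grads_fn, mixing)
print(models.round(2))  # rows are pulled toward the average of the targets
```

The mixing step spreads information between neighbors at every local iteration, which is what lets peer-to-peer exchange compensate for infrequent communication with the server.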
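The delay-minimization entry above mentions a bisection search; the generic sketch below shows the standard pattern of bisecting on a monotone feasibility predicate. The deadline predicate and workload numbers are hypothetical stand-ins, not the paper's actual constraint model.

```python
def bisect_min(feasible, lo, hi, tol=1e-6):
    # Smallest x in [lo, hi] with feasible(x) True, assuming feasibility
    # is monotone: infeasible below some threshold, feasible above it.
    assert feasible(hi), "upper bound must be feasible"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            hi = mid  # mid is achievable; tighten from above
        else:
            lo = mid  # mid is too aggressive; tighten from below
    return hi

# Stand-in predicate: can all clients finish within deadline t?
workloads = [1.2, 0.7, 2.4, 1.9]  # per-client compute + transmission time (s)
feasible = lambda t: all(w <= t for w in workloads)
print(bisect_min(feasible, 0.0, 10.0))  # ~2.4, the slowest client's delay
```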