On the Convergence Time of Federated Learning Over Wireless Networks Under Imperfect CSI
- URL: http://arxiv.org/abs/2104.00331v1
- Date: Thu, 1 Apr 2021 08:30:45 GMT
- Title: On the Convergence Time of Federated Learning Over Wireless Networks Under Imperfect CSI
- Authors: Francesco Pase, Marco Giordani, Michele Zorzi
- Abstract summary: We propose a training process that takes channel statistics as a bias to minimize the convergence time under imperfect CSI.
We also examine the trade-off between the number of clients involved in the training process and the resulting model accuracy across different fading regimes.
- Score: 28.782485580296374
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) has recently emerged as an attractive decentralized
solution for wireless networks to collaboratively train a shared model while
keeping data localized. Existing FL methods, however, generally assume perfect
knowledge of the Channel State Information (CSI) during the training phase,
which may be hard to acquire in the case of fast-fading channels. Moreover,
analyses in the literature either consider a fixed number of clients
participating in the training of the federated model, or simply assume that
all clients transmit model data at the maximum achievable rate.
In this paper, we fill these gaps by proposing a training process that takes
channel statistics as a bias to minimize the convergence time under imperfect
CSI. Numerical experiments demonstrate that it is possible to reduce the
training time by neglecting model updates from clients that cannot sustain a
minimum predefined transmission rate. We also examine the trade-off between
the number of clients involved in the training process and the resulting model
accuracy across different fading regimes.
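As a rough illustration of the selection rule described above (a sketch under assumptions, not the authors' exact algorithm): under Rayleigh fading the instantaneous SNR is exponentially distributed, so the probability that a client's rate falls below a minimum predefined transmission rate has a closed form, and clients with a high outage probability can be excluded from a round. The function names, unit bandwidth, and the 10% outage threshold below are illustrative assumptions.

```python
import numpy as np

def rate_outage_probability(mean_snr, r_min, bandwidth=1.0):
    """P[bandwidth * log2(1 + SNR) < r_min] for SNR ~ Exp(mean_snr),
    i.e. the Rayleigh-fading outage probability (known from statistics,
    no instantaneous CSI needed)."""
    snr_min = 2.0 ** (r_min / bandwidth) - 1.0
    return 1.0 - np.exp(-snr_min / mean_snr)

def select_clients(mean_snrs, r_min, max_outage=0.1):
    """Keep only clients statistically likely to sustain r_min."""
    return [k for k, g in enumerate(mean_snrs)
            if rate_outage_probability(g, r_min) <= max_outage]

mean_snrs = np.random.default_rng(0).uniform(1.0, 20.0, size=10)  # per-client average SNR
print(select_clients(mean_snrs, r_min=2.0))
```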
Related papers
- Fast-FedUL: A Training-Free Federated Unlearning with Provable Skew Resilience [26.647028483763137]
We introduce Fast-FedUL, a tailored unlearning method for Federated Learning (FL).
We develop an algorithm to systematically remove the impact of the target client from the trained model.
Experimental results indicate that Fast-FedUL effectively removes almost all traces of the target client, while retaining the knowledge of untargeted clients.
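A minimal sketch of the underlying idea of removing a logged client's contributions from an aggregated model, assuming the server recorded per-round, per-client updates and aggregation weights. This naive subtraction is only a first-order illustration, not Fast-FedUL's actual skew-resilient algorithm.

```python
import numpy as np

def unlearn_by_subtraction(final_model, update_log, target):
    """update_log: list of rounds; each round is a dict
    {client_id: (weight, update_vector)} as logged by the server."""
    model = final_model.copy()
    for round_updates in update_log:
        if target in round_updates:
            weight, update = round_updates[target]
            model -= weight * update  # undo the target's weighted update
    return model

# Toy usage with a 4-parameter "model" and two logged rounds.
log = [{0: (0.5, np.ones(4)), 1: (0.5, -np.ones(4))},
       {0: (1.0, 0.1 * np.ones(4))}]
w_final = np.zeros(4) + sum(w * u for r in log for w, u in r.values())
print(unlearn_by_subtraction(w_final, log, target=0))  # client 1's share remains
```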
arXiv Detail & Related papers (2024-05-28T10:51:38Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
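A hedged sketch of what trajectory-regularized local training can look like: each local step is pulled toward a direction extrapolated from recent global models, which counteracts client drift on non-IID data. The extrapolation rule and coefficients below are assumptions; FedPTR's actual projection step is not reproduced here.

```python
import numpy as np

def local_step(w, grad_fn, w_prev_global, w_global, lr=0.1, lam=0.5):
    # Reference direction: where the global model has been heading.
    trajectory = w_global - w_prev_global
    # Regularizer pulls the local update toward (w_global + trajectory).
    reg_grad = lam * (w - (w_global + trajectory))
    return w - lr * (grad_fn(w) + reg_grad)

# Toy quadratic local objective with a biased (non-IID) optimum.
grad_fn = lambda w: w - np.array([3.0, -2.0])
w_global, w_prev = np.array([1.0, 1.0]), np.array([0.8, 0.9])
w = w_global.copy()
for _ in range(20):
    w = local_step(w, grad_fn, w_prev, w_global)
print(w)  # stays closer to the global trajectory than plain local SGD would
```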
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- SalientGrads: Sparse Models for Communication Efficient and Data Aware Distributed Federated Training [1.0413504599164103]
Federated learning (FL) enables the training of a model leveraging decentralized data in client sites while preserving privacy by not collecting data.
One of the significant challenges of FL is limited computation and low communication bandwidth in resource limited edge client nodes.
We propose Salient Grads, which simplifies the process of sparse training by choosing a data aware subnetwork before training.
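A small sketch in the spirit of data-aware pruning-before-training: parameters are scored with a gradient-based saliency computed on local client data, and only a top fraction is kept trainable. The |w · g| score and keep ratio are generic assumptions, not SalientGrads' exact scoring or aggregation.

```python
import numpy as np

def saliency_mask(weights, grads, keep_ratio=0.2):
    """Keep the top-|w * g| fraction of parameters; train only those."""
    scores = np.abs(weights * grads)
    k = max(1, int(keep_ratio * scores.size))
    threshold = np.partition(scores.ravel(), -k)[-k]
    return (scores >= threshold).astype(weights.dtype)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
g = rng.normal(size=(4, 4))      # gradient from a batch of local client data
mask = saliency_mask(w, g, keep_ratio=0.25)
print(mask.sum(), "of", mask.size, "weights kept")
# During training, apply w -= lr * (g * mask), so masked-out weights stay fixed
# and only the sparse subnetwork is communicated.
```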
arXiv Detail & Related papers (2023-04-15T06:46:37Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
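As a loose illustration of alleviating forgetting in local training (a distillation-style proxy, not FedReg's actual pseudo-data mechanism): the local objective is augmented with a penalty that keeps local predictions close to those of the received global model.

```python
import numpy as np

def local_grad_with_anchor(w, x, y, w_global, lam=0.3):
    """Gradient of local squared-error loss plus an anchor term
    lam/2 * ||x @ w - x @ w_global||^2 / n for a linear model."""
    pred, pred_g = x @ w, x @ w_global
    task_grad = x.T @ (pred - y) / len(x)
    anchor_grad = lam * x.T @ (pred - pred_g) / len(x)  # resist forgetting
    return task_grad + anchor_grad

rng = np.random.default_rng(1)
x, y = rng.normal(size=(32, 5)), rng.normal(size=32)
w_global = rng.normal(size=5)
w = w_global.copy()
for _ in range(50):
    w -= 0.1 * local_grad_with_anchor(w, x, y, w_global)
print(np.linalg.norm(w - w_global))  # stays anchored near the global model
```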
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Joint Client Scheduling and Resource Allocation under Channel Uncertainty in Federated Learning [47.97586668316476]
Federated learning (FL) over wireless networks depends on the reliability of the client-server connectivity and clients' local computation capabilities.
In this article, we investigate the problem of client scheduling and resource block (RB) allocation to enhance the performance of model training using FL.
The proposed method reduces the training accuracy loss gap by up to 40.7% compared to state-of-the-art client scheduling and RB allocation methods.
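A toy greedy scheduler conveying the flavor of joint client scheduling and RB allocation under channel uncertainty (not the paper's optimization): clients are matched to resource blocks by expected rate, and only clients clearing a rate threshold are scheduled in a round.

```python
import numpy as np

def greedy_schedule(expected_rates, r_min):
    """expected_rates[k, b]: expected rate of client k on RB b."""
    n_clients, n_rbs = expected_rates.shape
    assignment, used = {}, set()
    # Schedule clients in order of their best achievable expected rate.
    order = np.argsort(-expected_rates.max(axis=1))
    for k in order:
        free = [b for b in range(n_rbs) if b not in used]
        if not free:
            break
        best = max(free, key=lambda b: expected_rates[k, b])
        if expected_rates[k, best] >= r_min:  # skip clients below threshold
            assignment[int(k)] = best
            used.add(best)
    return assignment

rates = np.random.default_rng(2).uniform(0.5, 3.0, size=(6, 4))
print(greedy_schedule(rates, r_min=1.0))
```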
arXiv Detail & Related papers (2021-06-12T15:18:48Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
- Communication-Efficient Federated Learning with Dual-Side Low-Rank Compression [8.353152693578151]
Federated learning (FL) is a promising and powerful approach for training deep learning models without sharing the raw data of clients.
We propose a new training method, referred to as federated learning with dual-side low-rank compression (FedDLR).
We show that FedDLR outperforms state-of-the-art solutions in terms of both communication and computation efficiency.
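A minimal sketch of low-rank compression with truncated SVD, applied to the matrices exchanged in both directions in the spirit of dual-side compression; the rank choice and factorization details are assumptions rather than FedDLR's exact scheme.

```python
import numpy as np

def compress(W, rank):
    """Truncated SVD: transmit two thin factors instead of the full matrix."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank]

def decompress(A, B):
    return A @ B

W = np.random.default_rng(3).normal(size=(64, 32))  # a weight matrix to send
A, B = compress(W, rank=4)
sent = A.size + B.size
print(f"floats sent: {sent} vs {W.size}; "
      f"relative error: {np.linalg.norm(W - decompress(A, B)) / np.linalg.norm(W):.3f}")
```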
arXiv Detail & Related papers (2021-04-26T09:13:31Z)
- Time-Correlated Sparsification for Communication-Efficient Federated Learning [6.746400031322727]
Federated learning (FL) enables multiple clients to collaboratively train a shared model without disclosing their local datasets.
We introduce a novel time-correlated sparsification scheme, which seeks a certain correlation between the sparse representations used at consecutive iterations in FL.
We show that TCS can achieve centralized training accuracy with 100 times sparsification, and up to 2000 times reduction in the communication load when employed together with quantization.
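A compact sketch of the time-correlation idea: indices sent in the previous round receive a score bonus, so consecutive top-k masks overlap and the mask positions change slowly, which is what makes their encoding cheap. The bonus value and plain top-k rule below are illustrative assumptions, not the exact TCS scheme.

```python
import numpy as np

def tcs_mask(grad, prev_mask, k, bonus=0.5):
    """Top-k selection with a bonus for indices kept in the previous round."""
    scores = np.abs(grad) * (1.0 + bonus * prev_mask)
    idx = np.argpartition(scores, -k)[-k:]
    mask = np.zeros_like(grad)
    mask[idx] = 1.0
    return mask

rng = np.random.default_rng(4)
prev = np.zeros(1000)
for step in range(3):
    g = rng.normal(size=1000)
    mask = tcs_mask(g, prev, k=10)
    print("overlap with previous mask:", int((mask * prev).sum()))
    prev = mask
```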
arXiv Detail & Related papers (2021-01-21T20:15:55Z)
- Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity [57.275753974812666]
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
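A minimal sketch of the selection principle, assuming a simple doubling schedule (the paper's adaptive rule is more refined): training starts with the fastest clients, and the participating set grows as the model needs more data to keep improving.

```python
import numpy as np

def participating_clients(compute_speeds, stage):
    """Return the 2**stage fastest clients (capped at all clients)."""
    order = np.argsort(-np.asarray(compute_speeds))   # fastest first
    n = min(2 ** stage, len(compute_speeds))
    return order[:n]

speeds = [5.0, 1.0, 3.0, 0.5, 4.0, 2.0]  # hypothetical per-client speeds
for stage in range(4):
    print(f"stage {stage}:", participating_clients(speeds, stage).tolist())
```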
arXiv Detail & Related papers (2020-12-28T19:21:14Z)
- CosSGD: Nonlinear Quantization for Communication-efficient Federated Learning [62.65937719264881]
Federated learning facilitates learning across clients without transferring local data on these clients to a central server.
We propose a nonlinear quantization for compressed gradient descent, which can be easily utilized in federated learning.
Our system significantly reduces the communication cost by up to three orders of magnitude, while maintaining convergence and accuracy of the training process.
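For flavor, a generic nonlinear (mu-law companding) quantizer for gradient vectors; it illustrates why nonlinear quantization preserves small-magnitude entries better than uniform quantization, but it is not CosSGD's specific quantization function.

```python
import numpy as np

def quantize(g, bits=4, mu=255.0):
    """Compand with mu-law, then round to 2**(bits-1)-1 integer levels."""
    scale = np.abs(g).max() or 1.0
    x = g / scale                                   # normalize to [-1, 1]
    comp = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    levels = 2 ** (bits - 1) - 1
    q = np.round(comp * levels).astype(np.int8)     # integers on the wire
    return q, scale

def dequantize(q, scale, bits=4, mu=255.0):
    levels = 2 ** (bits - 1) - 1
    comp = q.astype(np.float64) / levels
    x = np.sign(comp) * np.expm1(np.abs(comp) * np.log1p(mu)) / mu
    return x * scale

g = np.random.default_rng(5).normal(size=8)
q, s = quantize(g)
print(np.round(g, 3), np.round(dequantize(q, s), 3), sep="\n")
```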
arXiv Detail & Related papers (2020-12-15T12:20:28Z)