Convergence Analysis and System Design for Federated Learning over
Wireless Networks
- URL: http://arxiv.org/abs/2105.00872v1
- Date: Fri, 30 Apr 2021 02:33:29 GMT
- Title: Convergence Analysis and System Design for Federated Learning over
Wireless Networks
- Authors: Shuo Wan, Jiaxun Lu, Pingyi Fan, Yunfeng Shao, Chenghui Peng and
Khaled B. Letaief
- Abstract summary: Federated learning (FL) has emerged as an important and promising learning scheme in IoT.
FL training requires frequent model exchange, which is largely affected by the wireless communication network.
In this paper, we analyze the convergence rate of FL training considering the joint impact of communication network and training settings.
- Score: 16.978276697446724
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) has recently emerged as an important and promising
learning scheme in IoT, enabling devices to jointly learn a model without
sharing their raw data sets. However, as the training data in FL is not
collected and stored centrally, FL training requires frequent model exchange,
which is largely affected by the wireless communication network. Therein,
limited bandwidth and random packet loss restrict interactions in training.
Meanwhile, the insufficient message synchronization among distributed clients
could also affect FL convergence. In this paper, we analyze the convergence
rate of FL training considering the joint impact of communication network and
training settings. Further, by considering the training costs in terms of time
and power, the optimal scheduling problems for communication networks are
formulated. The developed theoretical results can be used to assist the system
parameter selections and explain the principle of how the wireless
communication system could influence the distributed training process and
network scheduling.
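The exchange-and-aggregate loop the abstract refers to is the standard federated averaging (FedAvg) pattern: each device trains locally on its own data and only model parameters are exchanged. A minimal sketch on hypothetical toy data (a linear model with squared loss; not the paper's exact setup) is:

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """One client's local training: plain gradient descent on a linear model."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def fedavg_round(w_global, clients):
    """One communication round: each client trains locally, the server
    averages the returned models weighted by local sample count."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_sgd(w_global, X, y))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    return sum(s * u for s, u in zip(sizes / sizes.sum(), updates))

# Toy data: two clients whose noiseless labels come from the same true model,
# so the global optimum is w_true.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
```

In the wireless setting the paper analyzes, each `fedavg_round` call corresponds to one uplink/downlink model exchange, which is exactly where limited bandwidth and packet loss enter.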
Related papers
- Overlay-based Decentralized Federated Learning in Bandwidth-limited Networks [3.9162099309900835]
Decentralized federated learning (DFL) has the promise of boosting the deployment of artificial intelligence (AI) by directly learning across distributed agents without centralized coordination.
Most existing solutions were based on the simplistic assumption that neighboring agents are physically adjacent in the underlying communication network.
We jointly design the communication demands and the communication schedule for overlay-based DFL in bandwidth-limited networks without requiring explicit cooperation from the underlying network.
arXiv Detail & Related papers (2024-08-08T18:05:11Z)
- WHALE-FL: Wireless and Heterogeneity Aware Latency Efficient Federated Learning over Mobile Devices via Adaptive Subnetwork Scheduling [17.029433544096257]
We develop a wireless and heterogeneity aware latency efficient FL (WHALE-FL) approach to accelerate FL training through adaptive subnetwork scheduling.
Our evaluation shows that, compared with peer designs, WHALE-FL effectively accelerates FL training without sacrificing learning accuracy.
arXiv Detail & Related papers (2024-05-01T22:01:40Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Scheduling and Aggregation Design for Asynchronous Federated Learning over Wireless Networks [56.91063444859008]
Federated Learning (FL) is a collaborative machine learning framework that combines on-device training and server-based aggregation.
We propose an asynchronous FL design with periodic aggregation to tackle the straggler issue in FL systems.
We show that an "age-aware" aggregation weighting design can significantly improve the learning performance in an asynchronous FL setting.
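The summary names age-aware weighting but not the exact rule; the idea is that updates computed from a stale base model should count for less. A sketch under that assumption (exponential decay in staleness is one simple choice, not necessarily the paper's):

```python
import numpy as np

def age_aware_aggregate(updates, ages, decay=0.5):
    """Aggregate asynchronous client updates, down-weighting stale ones.
    `ages[i]` is how many rounds old update i's base model is; the exact
    weighting in the paper may differ -- exponential decay is illustrative."""
    raw = np.array([decay ** a for a in ages], dtype=float)
    weights = raw / raw.sum()  # normalize so weights sum to one
    return sum(wt * u for wt, u in zip(weights, updates))

# A fresh update (age 0) pulls the model harder than a stale one (age 2):
fresh = np.array([1.0, 1.0])
stale = np.array([-1.0, -1.0])
new_w = age_aware_aggregate([fresh, stale], ages=[0, 2])
# weights are [0.8, 0.2], so the aggregate lands at [0.6, 0.6]
```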
arXiv Detail & Related papers (2022-12-14T17:33:01Z)
- Performance Optimization for Variable Bitwidth Federated Learning in Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
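The bitwidth scheme above transmits quantized model parameters; the summary does not specify the quantizer, but stochastic uniform quantization is a standard, unbiased choice and shows the accuracy/bitwidth trade-off:

```python
import numpy as np

def quantize(w, bits, rng):
    """Stochastic uniform quantization of a parameter vector to `bits` bits.
    Unbiased: E[quantize(w)] == w elementwise. Illustrative only -- the
    paper's exact quantizer is not given in the summary."""
    levels = 2 ** bits - 1
    lo, hi = w.min(), w.max()
    if hi == lo:
        return w.copy()
    scaled = (w - lo) / (hi - lo) * levels  # map values onto [0, levels]
    floor = np.floor(scaled)
    prob_up = scaled - floor                # round up with this probability
    q = floor + (rng.random(w.shape) < prob_up)
    return lo + q / levels * (hi - lo)

rng = np.random.default_rng(0)
w = rng.normal(size=1000)
q8 = quantize(w, 8, rng)  # 8 bits per parameter: small error
q2 = quantize(w, 2, rng)  # 2 bits per parameter: large error, cheap uplink
```

Choosing the bitwidth per round is exactly the action the paper's RL agent optimizes: fewer bits cost less airtime but inject more quantization noise into each aggregation.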
arXiv Detail & Related papers (2022-09-21T08:52:51Z)
- Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
arXiv Detail & Related papers (2022-09-02T10:29:56Z)
- CFLIT: Coexisting Federated Learning and Information Transfer [18.30671838758503]
We study the coexistence of over-the-air FL and traditional information transfer (IT) in a mobile edge network.
We propose a coexisting federated learning and information transfer (CFLIT) communication framework, where the FL and IT devices share the wireless spectrum in an OFDM system.
arXiv Detail & Related papers (2022-07-26T13:17:28Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL) as a paradigm of collaborative learning techniques has obtained increasing research attention.
It is of interest to investigate fast responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Delay Minimization for Federated Learning Over Wireless Communication Networks [172.42768672943365]
The problem of delay computation for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution.
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
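The summary names a bisection search but not the paper's objective; the generic procedure on a stand-in monotone condition (not the paper's actual delay function) looks like this:

```python
def bisect(f, lo, hi, tol=1e-9):
    """Find a root of a continuous f with f(lo) and f(hi) of opposite sign,
    halving the bracket each step. The paper applies bisection to its
    delay-minimization optimality condition; f here is a stand-in."""
    assert f(lo) * f(hi) <= 0, "root must be bracketed"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid  # root lies in the left half
        else:
            lo = mid  # root lies in the right half
    return 0.5 * (lo + hi)

# Stand-in condition: solve x^2 - 2 = 0 on [0, 2], converging to sqrt(2).
root = bisect(lambda x: x * x - 2.0, 0.0, 2.0)
```

Bisection needs only sign evaluations and halves the search interval each iteration, which is why it suits resource-allocation conditions that are monotone in a single scalar (e.g. a power or bandwidth variable).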
arXiv Detail & Related papers (2020-07-05T19:00:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.