Delay Minimization for Federated Learning Over Wireless Communication Networks
- URL: http://arxiv.org/abs/2007.03462v1
- Date: Sun, 5 Jul 2020 19:00:07 GMT
- Title: Delay Minimization for Federated Learning Over Wireless Communication Networks
- Authors: Zhaohui Yang and Mingzhe Chen and Walid Saad and Choong Seon Hong and Mohammad Shikh-Bahaei and H. Vincent Poor and Shuguang Cui
- Abstract summary: The problem of delay minimization for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution.
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
- Score: 172.42768672943365
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, the problem of delay minimization for federated learning (FL)
over wireless communication networks is investigated. In the considered model,
each user exploits limited local computational resources to train a local FL
model with its collected data and, then, sends the trained FL model parameters
to a base station (BS) which aggregates the local FL models and broadcasts the
aggregated FL model back to all the users. Since FL involves learning model
exchanges between the users and the BS, both computation and communication
latencies are determined by the required learning accuracy level, which affects
the convergence rate of the FL algorithm. This joint learning and communication
problem is formulated as a delay minimization problem, where it is proved that
the objective function is a convex function of the learning accuracy. Then, a
bisection search algorithm is proposed to obtain the optimal solution.
Simulation results show that the proposed algorithm can reduce delay by up to
27.3% compared to conventional FL methods.
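To make the optimization step concrete, below is a minimal sketch of such a bisection search. The delay model `total_delay` is an illustrative stand-in (global rounds scaling as 1/(1 - theta), per-round computation scaling as log(1/theta)), not the paper's exact expression; since the objective is convex in the accuracy, bisection on the sign of its derivative locates the minimizer.

```python
import math

def total_delay(theta, t_comm=0.05, t_cmp=0.01, a=1.0):
    """Illustrative delay model (NOT the paper's exact expression):
    roughly a/(1-theta) global rounds, each costing the uplink time
    plus local computation that grows like log(1/theta)."""
    return (a / (1.0 - theta)) * (t_comm + t_cmp * math.log(1.0 / theta))

def bisect_argmin(f, lo=1e-6, hi=1.0 - 1e-6, tol=1e-9):
    """Bisection on the sign of f'(theta); valid when f is convex."""
    def deriv(x, h=1e-7):
        return (f(x + h) - f(x - h)) / (2.0 * h)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if deriv(mid) > 0.0:
            hi = mid          # minimizer lies to the left
        else:
            lo = mid          # minimizer lies to the right
    return 0.5 * (lo + hi)

theta_star = bisect_argmin(total_delay)
print(f"delay-minimizing accuracy ~ {theta_star:.4f}, "
      f"delay ~ {total_delay(theta_star):.4f}")
```

The appeal of the convexity proof is exactly this: it reduces the joint learning-and-communication design to a one-dimensional search that needs only function evaluations.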
Related papers
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to address the computation and communication constraints of heterogeneous edge devices.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
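As a rough illustration of that split, the sketch below slices a flat weight vector into a shared part (averaged across devices, then magnitude-pruned) and a personalized remainder. The split point `n_global`, the `keep_ratio`, and magnitude pruning itself are assumptions for illustration; the paper's split is architectural (representation layers vs. a per-device part), not a flat-vector slice.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_model(w, n_global):
    # Hypothetical split: first n_global weights are the shared
    # representation, the rest stay device-personalized.
    return w[:n_global], w[n_global:]

def magnitude_prune(w_shared, keep_ratio=0.5):
    # Zero out the smallest-magnitude weights of the shared part.
    k = max(1, int(len(w_shared) * keep_ratio))
    thresh = np.sort(np.abs(w_shared))[-k]
    return np.where(np.abs(w_shared) >= thresh, w_shared, 0.0)

# One illustrative aggregation round over three devices.
device_models = [rng.normal(size=10) for _ in range(3)]
shared_parts, personal_parts = zip(*(split_model(w, 6) for w in device_models))
shared = magnitude_prune(np.mean(shared_parts, axis=0))  # broadcast to all
new_models = [np.concatenate([shared, p]) for p in personal_parts]
```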
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recently, FL has been interpreted within a Model-Agnostic Meta-Learning (MAML) framework, which gives FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
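A minimal first-order MAML-style federated round, on a toy linear-regression loss, might look like the sketch below; the inner step size `alpha`, the support/query splits, and the first-order approximation are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def grad(w, X, y):
    # Gradient of mean-squared error for a linear model (toy loss).
    return 2.0 * X.T @ (X @ w - y) / len(y)

def client_meta_grad(w, client_data, alpha=0.1):
    # First-order MAML: adapt on the support split, then return the
    # gradient at the adapted point evaluated on the query split.
    (Xs, ys), (Xq, yq) = client_data
    w_adapted = w - alpha * grad(w, Xs, ys)   # fast per-client adaptation
    return grad(w_adapted, Xq, yq)            # outer (meta) gradient

rng = np.random.default_rng(1)
w = np.zeros(3)
clients = [((rng.normal(size=(8, 3)), rng.normal(size=8)),
            (rng.normal(size=(8, 3)), rng.normal(size=8)))
           for _ in range(4)]

for _ in range(50):                            # federated meta-training rounds
    meta = np.mean([client_meta_grad(w, c) for c in clients], axis=0)
    w -= 0.05 * meta                           # server-side meta-update
```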
arXiv Detail & Related papers (2023-03-23T02:42:10Z)
- Performance Optimization for Variable Bitwidth Federated Learning in Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
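The quantized-upload step of such a scheme can be sketched with plain uniform quantization, as below. The fixed `bits` value stands in for the action an RL controller would select each round; the quantizer is an assumed stand-in, not the paper's design.

```python
import numpy as np

def quantize(w, bits):
    # Uniform quantization of a weight vector to the given bitwidth.
    levels = 2 ** bits - 1
    lo, hi = float(w.min()), float(w.max())
    q = np.round((w - lo) / (hi - lo) * levels).astype(np.uint32)
    return q, lo, hi

def dequantize(q, lo, hi, bits):
    levels = 2 ** bits - 1
    return lo + q / levels * (hi - lo)

rng = np.random.default_rng(2)
local_models = [rng.normal(size=100) for _ in range(5)]
bits = 4                                   # in the paper, chosen by an RL policy
uploads = [quantize(w, bits) for w in local_models]
decoded = [dequantize(q, lo, hi, bits) for q, lo, hi in uploads]
global_model = np.mean(decoded, axis=0)    # aggregation from quantized uploads
```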
arXiv Detail & Related papers (2022-09-21T08:52:51Z)
- Time-triggered Federated Learning over Wireless Networks [48.389824560183776]
We present a time-triggered FL algorithm (TT-Fed) over wireless networks.
Our proposed TT-Fed algorithm improves the converged test accuracy by up to 12.5% and 5% over two baseline FL schemes.
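One way to picture a time-triggered round is the toy simulation below: the server aggregates at fixed deadlines whatever updates are ready, and slower clients naturally fall into higher tiers. The tiering rule and the random "updates" are assumptions for illustration, not TT-Fed's actual design.

```python
import numpy as np

rng = np.random.default_rng(3)
num_clients, deadline = 6, 1.0                # fixed trigger period (seconds)
compute_time = rng.uniform(0.5, 2.5, num_clients)
# A client's tier = how many trigger periods one local update takes.
tiers = np.ceil(compute_time / deadline).astype(int)

w_global = np.zeros(4)
for t in range(1, 5):                         # trigger instants t * deadline
    ready = [i for i in range(num_clients) if t % tiers[i] == 0]
    if ready:
        # Stand-in for locally trained models from the clients that finished.
        updates = [w_global + rng.normal(scale=0.1, size=4) for _ in ready]
        w_global = np.mean(updates, axis=0)   # aggregate only the ready tier(s)
```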
arXiv Detail & Related papers (2022-04-26T16:37:29Z)
- Resource-Efficient and Delay-Aware Federated Learning Design under Edge Heterogeneity [10.702853653891902]
Federated learning (FL) has emerged as a popular methodology for distributing machine learning across wireless edge devices.
In this work, we consider optimizing the tradeoff between model performance and resource utilization in FL.
Our proposed StoFedDelAv incorporates a local-global model combiner into the FL computation step.
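A minimal reading of such a combiner is a convex mix of the local and global models before each local gradient step, as sketched below; the combiner weight `gamma` and the toy quadratic objectives are hypothetical stand-ins for the paper's construction.

```python
import numpy as np

def local_update(w_local, w_global, grad_fn, lr=0.1, gamma=0.5):
    # Mix local and global models (the combiner), then take one
    # gradient step; gamma is a hypothetical combiner weight.
    w_mix = gamma * w_local + (1.0 - gamma) * w_global
    return w_mix - lr * grad_fn(w_mix)

rng = np.random.default_rng(4)
targets = [rng.normal(size=3) for _ in range(4)]   # per-device optima (toy)
local_models = [np.zeros(3) for _ in targets]
w_global = np.zeros(3)

for _ in range(30):
    local_models = [local_update(w, w_global, lambda v, t=t: 2.0 * (v - t))
                    for w, t in zip(local_models, targets)]
    w_global = np.mean(local_models, axis=0)       # FedAvg-style aggregation
```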
arXiv Detail & Related papers (2021-12-27T22:30:15Z)
- Unit-Modulus Wireless Federated Learning Via Penalty Alternating Minimization [64.76619508293966]
Wireless federated learning (FL) is an emerging machine learning paradigm that trains a global parametric model from distributed datasets via wireless communications.
This paper proposes a unit-modulus wireless FL framework that simultaneously uploads local model parameters and computes global model parameters via wireless communications.
arXiv Detail & Related papers (2021-08-31T08:19:54Z)
- Joint Optimization of Communications and Federated Learning Over the Air [32.14738452396869]
Federated learning (FL) is an attractive paradigm for making use of rich distributed data while protecting data privacy.
In this paper, we study joint optimization of communications and FL based on analog aggregation transmission in realistic wireless networks.
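Analog aggregation can be pictured as below: clients transmit pre-scaled parameters at the same time, the channel superposes them, and the server normalizes the noisy sum into a FedAvg-style estimate. The real-valued channel and perfect gain inversion are simplifying assumptions; the paper optimizes transceivers under realistic constraints.

```python
import numpy as np

rng = np.random.default_rng(5)
num_clients, dim, noise_std = 5, 8, 0.01
models = [rng.normal(size=dim) for _ in range(num_clients)]
gains = rng.uniform(0.5, 1.5, num_clients)   # toy real-valued channel gains

# Each client pre-inverts its channel so the over-the-air sum is the
# plain sum of models (power constraints are ignored here).
tx = [w / gains[i] for i, w in enumerate(models)]
rx = sum(gains[i] * tx[i] for i in range(num_clients))   # superposition
rx = rx + rng.normal(scale=noise_std, size=dim)          # receiver noise
w_global = rx / num_clients                 # noisy estimate of the average
```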
arXiv Detail & Related papers (2021-04-08T03:38:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.