Convergence Time Optimization for Federated Learning over Wireless
Networks
- URL: http://arxiv.org/abs/2001.07845v2
- Date: Fri, 26 Mar 2021 13:28:33 GMT
- Title: Convergence Time Optimization for Federated Learning over Wireless
Networks
- Authors: Mingzhe Chen, H. Vincent Poor, Walid Saad, and Shuguang Cui
- Abstract summary: A wireless network is considered in which wireless users transmit their local FL models (trained using their locally collected data) to a base station (BS).
The BS, acting as a central controller, generates a global FL model using the received local FL models and broadcasts it back to all users.
Due to the limited number of resource blocks (RBs) in a wireless network, only a subset of users can be selected to transmit their local FL model parameters to the BS.
Since each user has unique training data samples, the BS prefers to include all local user FL models to generate a converged global FL model.
- Score: 160.82696473996566
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, the convergence time of federated learning (FL), when deployed
over a realistic wireless network, is studied. In particular, a wireless
network is considered in which wireless users transmit their local FL models
(trained using their locally collected data) to a base station (BS). The BS,
acting as a central controller, generates a global FL model using the received
local FL models and broadcasts it back to all users. Due to the limited number
of resource blocks (RBs) in a wireless network, only a subset of users can be
selected to transmit their local FL model parameters to the BS at each learning
step. Moreover, since each user has unique training data samples, the BS
prefers to include all local user FL models to generate a converged global FL
model. Hence, the FL performance and convergence time will be significantly
affected by the user selection scheme. Therefore, it is necessary to design an
appropriate user selection scheme that enables users of higher importance to be
selected more frequently. This joint learning, wireless resource allocation,
and user selection problem is formulated as an optimization problem whose goal
is to minimize the FL convergence time while optimizing the FL performance. To
solve this problem, a probabilistic user selection scheme is proposed such that
the BS is connected to the users whose local FL models have significant effects
on its global FL model with high probabilities. Given the user selection
policy, the uplink RB allocation can be determined. To further reduce the FL
convergence time, artificial neural networks (ANNs) are used to estimate the
local FL models of the users that are not allocated any RBs for local FL model
transmission at each given learning step, which enables the BS to enhance its
global FL model and improve the FL convergence speed and performance.
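
The abstract does not give pseudocode, but the probabilistic user selection can be pictured as importance-weighted sampling under a resource-block budget. The sketch below is a minimal illustration assuming importance is measured by the norm of each user's local model update; the function name and the importance metric are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def probabilistic_user_selection(local_updates, num_rbs, rng=None):
    """Sample a subset of users to upload their local FL models.

    local_updates: list of 1-D numpy arrays, one per user, holding each
        user's local model update (local model minus current global model).
    num_rbs: number of uplink resource blocks, i.e. how many users can
        transmit at this learning step.

    Users whose local models deviate more from the global model are treated
    as more "important" and are selected with higher probability.  The
    norm-based importance measure is an assumption for illustration.
    """
    rng = np.random.default_rng() if rng is None else rng
    importance = np.array([np.linalg.norm(u) for u in local_updates])
    probs = importance / importance.sum()      # selection probabilities
    k = min(num_rbs, len(local_updates))       # cannot exceed the RB budget
    selected = rng.choice(len(local_updates), size=k, replace=False, p=probs)
    return sorted(selected.tolist())

# Example: 10 users, 3 resource blocks.
rng = np.random.default_rng(0)
updates = [rng.normal(scale=s, size=100) for s in np.linspace(0.1, 1.0, 10)]
print(probabilistic_user_selection(updates, num_rbs=3, rng=rng))
```

The paper formulates selection jointly with uplink RB allocation and the learning objective; the norm-only rule above is just the simplest importance-weighted variant of that idea.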
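The ANN-based estimation of missing local models can likewise be sketched as a small regressor kept at the BS: it is trained on rounds in which a user's model was actually received and queried in rounds in which that user got no RB. The input features (here, the mean of the received models), the `MissingModelEstimator` class, and the use of scikit-learn's `MLPRegressor` are assumptions for illustration, not the paper's network design.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

class MissingModelEstimator:
    """Illustrative per-user estimator of a local FL model at the BS."""

    def __init__(self, hidden=64):
        self.net = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=500)
        self.inputs, self.targets = [], []

    def record_round(self, received_models, user_model):
        """Store a training pair from a round where the user did transmit."""
        self.inputs.append(np.mean(received_models, axis=0))
        self.targets.append(user_model)

    def fit(self):
        self.net.fit(np.stack(self.inputs), np.stack(self.targets))

    def estimate(self, received_models):
        """Predict the user's local model in a round where it was not received."""
        x = np.mean(received_models, axis=0).reshape(1, -1)
        return self.net.predict(x)[0]

# Example with toy 20-dimensional "models".
rng = np.random.default_rng(1)
est = MissingModelEstimator()
for _ in range(30):                     # rounds where the user transmitted
    received = rng.normal(size=(3, 20))
    est.record_round(received, received.mean(axis=0) + 0.1 * rng.normal(size=20))
est.fit()
print(est.estimate(rng.normal(size=(3, 20))).shape)   # -> (20,)
```

At each learning step the BS would aggregate the received models together with these estimates, which is the mechanism the abstract credits for the improved convergence speed and performance.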
Related papers
- Joint Energy and Latency Optimization in Federated Learning over Cell-Free Massive MIMO Networks [36.6868658064971]
Federated learning (FL) is a distributed learning paradigm wherein users exchange FL models with a server instead of raw datasets.
Cell-free massive multiple-input multiple-output (CFmMIMO) is a promising architecture for implementing FL because it serves many users on the same time/frequency resources.
We propose an uplink power allocation scheme in FL over CFmMIMO by considering the effect of each user's power on the energy and latency of other users.
arXiv Detail & Related papers (2024-04-28T19:24:58Z) - Adaptive Model Pruning and Personalization for Federated Learning over
Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
arXiv Detail & Related papers (2023-09-04T21:10:45Z) - Joint Age-based Client Selection and Resource Allocation for
Communication-Efficient Federated Learning over NOMA Networks [8.030674576024952]
In federated learning (FL), distributed clients can collaboratively train a shared global model while retaining their own training data locally.
In this paper, a joint optimization problem of client selection and resource allocation is formulated, aiming to minimize the total time consumption of each round in FL over a non-orthogonal multiple access (NOMA) enabled wireless network.
In addition, a server-side artificial neural network (ANN) is proposed to predict the FL models of clients who are not selected at each round to further improve FL performance.
arXiv Detail & Related papers (2023-04-18T13:58:16Z) - FLCC: Efficient Distributed Federated Learning on IoMT over CSMA/CA [0.0]
Federated Learning (FL) has emerged as a promising approach for privacy preservation.
This article investigates the performance of FL on an application that might be used to improve a remote healthcare system over ad hoc networks.
We present two metrics to evaluate the network performance: 1) the probability of successful transmission while minimizing interference, and 2) the performance of the distributed FL model in terms of accuracy and loss.
arXiv Detail & Related papers (2023-03-29T16:36:42Z) - Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation
and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recently, FL has been interpreted within a Model-Agnostic Meta-Learning (MAML) framework, which brings FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
arXiv Detail & Related papers (2023-03-23T02:42:10Z) - Performance Optimization for Variable Bitwidth Federated Learning in
Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
arXiv Detail & Related papers (2022-09-21T08:52:51Z) - FedFog: Network-Aware Optimization of Federated Learning over Wireless
Fog-Cloud Systems [40.421253127588244]
Federated learning (FL) is capable of performing large distributed machine learning tasks across multiple edge users by periodically aggregating trained local parameters.
We first propose an efficient FL algorithm (called FedFog) to perform the local aggregation of gradient parameters at fog servers and global training update at the cloud.
arXiv Detail & Related papers (2021-07-04T08:03:15Z) - Optimization of User Selection and Bandwidth Allocation for Federated
Learning in VLC/RF Systems [96.56895050190064]
Limited radio frequency (RF) resources restrict the number of users that can participate in federated learning (FL).
This paper introduces visible light communication (VLC) as a supplement to RF in FL and builds a hybrid VLC/RF communication system.
The problem of user selection and bandwidth allocation is studied for FL implemented over a hybrid VLC/RF system.
arXiv Detail & Related papers (2021-03-05T02:44:56Z) - Delay Minimization for Federated Learning Over Wireless Communication
Networks [172.42768672943365]
The problem of delay minimization for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution (a generic sketch of such a search appears after this list).
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
arXiv Detail & Related papers (2020-07-05T19:00:07Z)
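
The last entry above mentions a bisection search over the achievable delay. As a generic illustration (not that paper's exact formulation), bisection finds the smallest delay for which a feasibility check passes, assuming feasibility is monotone in the delay. The `feasible` callback below is a hypothetical placeholder for the per-round resource-allocation constraints (e.g. energy and bandwidth budgets).

```python
def bisection_min_delay(feasible, t_low, t_high, tol=1e-4):
    """Find the smallest delay T in [t_low, t_high] with feasible(T) True,
    assuming feasibility is monotone: infeasible below some threshold,
    feasible above it."""
    assert feasible(t_high), "upper bound must be feasible"
    while t_high - t_low > tol:
        mid = 0.5 * (t_low + t_high)
        if feasible(mid):
            t_high = mid      # mid works, try a smaller delay
        else:
            t_low = mid       # mid is too small, increase the delay
    return t_high

# Toy example: the minimum achievable delay is 0.37 seconds.
print(round(bisection_min_delay(lambda t: t >= 0.37, 0.0, 1.0), 3))
```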
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.