To Talk or to Work: Delay Efficient Federated Learning over Mobile Edge
Devices
- URL: http://arxiv.org/abs/2111.00637v1
- Date: Mon, 1 Nov 2021 00:35:32 GMT
- Title: To Talk or to Work: Delay Efficient Federated Learning over Mobile Edge
Devices
- Authors: Pavana Prakash, Jiahao Ding, Maoqiang Wu, Minglei Shu, Rong Yu, and
Miao Pan
- Abstract summary: Mobile devices collaborate to train a model based on their own data under the coordination of a central server.
Without the central availability of data, computing nodes need to communicate the model updates often to attain convergence.
We propose a delay-efficient FL mechanism that reduces the overall time (consisting of both the computation and communication latencies) and communication rounds required for the model to converge.
- Score: 13.318419040823088
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL), an emerging distributed machine learning
paradigm, combined with edge computing is a promising area with novel
applications over mobile edge devices. In FL, mobile devices collaborate to
train a model on their own data under the coordination of a central server,
sharing only the model updates, so the training data remains private. However,
without the central availability of data, computing nodes must communicate
model updates frequently to attain convergence. Hence, the local computation
time needed to create local model updates, together with the time taken to
transmit them to and from the server, adds delay to the overall training time.
Furthermore, unreliable network connections may obstruct efficient
communication of these updates. To address these issues, in this paper we
propose a delay-efficient FL mechanism that reduces both the overall time
(consisting of the computation and communication latencies) and the number of
communication rounds required for the model to converge. Exploring the impact
of the various parameters contributing to delay, we seek to balance the
trade-off between wireless communication (to talk) and local computation (to
work). We formulate the overall time as an optimization problem and
demonstrate the efficacy of our approach through extensive simulations.
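To make the talk-versus-work trade-off concrete, here is a minimal sketch of a toy delay model, not the paper's actual formulation: all parameter values and the diminishing-returns convergence heuristic are invented for illustration. Doing more local epochs per round (work) increases per-round computation but reduces the number of communication rounds (talk).

```python
import numpy as np

def overall_delay(local_epochs, n_samples=500, cycles_per_sample=1e4,
                  cpu_freq=1e9, model_bits=1e7, uplink=5e6, downlink=2e7,
                  base_rounds=200):
    """Toy overall-delay model: rounds * (computation + communication).

    More local epochs shrink the rounds to convergence with diminishing
    returns -- a stand-in heuristic, not the paper's convergence analysis.
    """
    t_comp = local_epochs * n_samples * cycles_per_sample / cpu_freq
    t_comm = model_bits / uplink + model_bits / downlink
    rounds = int(np.ceil(base_rounds / (1 + np.log2(local_epochs))))
    return rounds * (t_comp + t_comm)

# Sweep the number of local epochs to balance talking against working.
delays = {e: overall_delay(e) for e in (1, 4, 16, 64, 256)}
print({e: round(t, 1) for e, t in delays.items()})
print("best local epochs:", min(delays, key=delays.get))
```

With these invented numbers the minimum sits at an intermediate number of local epochs: too few and communication dominates, too many and local computation does.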
Related papers
- Adaptive Model Pruning and Personalization for Federated Learning over
Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part, shared with all devices and pruned to learn data representations, and a personalized part that is fine-tuned for a specific device.
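As an illustration of that split, the sketch below uses invented layer shapes and names; magnitude pruning is one common choice, not necessarily the paper's. The server prunes the shared representation, while each device fine-tunes only its own head.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-part model: a shared representation and a per-device head.
global_part = {"repr_w": rng.normal(size=(32, 16))}      # shared, prunable
personal_part = {"head_w": rng.normal(size=(16, 10))}    # stays on the device

def prune(weights, keep_ratio=0.5):
    """Magnitude pruning: zero out the smallest-magnitude entries."""
    threshold = np.quantile(np.abs(weights).ravel(), 1.0 - keep_ratio)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

# Server prunes the global part before broadcasting it to all devices ...
global_part["repr_w"] = prune(global_part["repr_w"], keep_ratio=0.5)

# ... while each device fine-tunes only its personalized head on local data.
def local_finetune(head_w, features, labels, lr=0.1, steps=10):
    for _ in range(steps):
        logits = features @ head_w
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        grad = features.T @ (probs - labels) / len(features)
        head_w = head_w - lr * grad
    return head_w

x = rng.normal(size=(64, 16))                    # synthetic local features
y = np.eye(10)[rng.integers(0, 10, size=64)]     # synthetic one-hot labels
personal_part["head_w"] = local_finetune(personal_part["head_w"], x, y)
```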
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Time-sensitive Learning for Heterogeneous Federated Edge Intelligence [52.83633954857744]
We investigate real-time machine learning in a federated edge intelligence (FEI) system.
FEI systems exhibit heterogeneous distributions of communication and computational resources.
We propose a time-sensitive federated learning (TS-FL) framework to minimize the overall run-time for collaboratively training a shared ML model.
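For intuition, here is a toy model of the run-time issue in a synchronous round over heterogeneous devices: the round ends when the slowest device finishes, so giving slower devices less local work shortens the round. The numbers and the closed-form split are invented, not TS-FL's actual solution.

```python
import numpy as np

# Heterogeneous devices: per-sample compute time and uplink rate differ.
comp_per_sample = np.array([1.0, 2.0, 4.0]) * 1e-4   # seconds per sample
uplink_rate = np.array([10e6, 5e6, 2e6])             # bits per second
model_bits = 8e6

def round_time(samples_per_device):
    """A synchronous round ends when the slowest device finishes."""
    t = samples_per_device * comp_per_sample + model_bits / uplink_rate
    return t.max()

# Equalize compute by giving slower devices fewer samples per round;
# this heuristic split stands in for an actual optimized allocation.
budget = 3000                                  # total samples per round
inv = 1.0 / comp_per_sample
balanced = np.floor(budget * inv / inv.sum())
naive = np.full(3, budget // 3)
print(round_time(naive), round_time(balanced))
```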
arXiv Detail & Related papers (2023-01-26T08:13:22Z)
- Scheduling and Aggregation Design for Asynchronous Federated Learning
over Wireless Networks [56.91063444859008]
Federated Learning (FL) is a collaborative machine learning framework that combines on-device training and server-based aggregation.
We propose an asynchronous FL design with periodic aggregation to tackle the straggler issue in FL systems.
We show that an "age-aware" aggregation weighting design can significantly improve the learning performance in an asynchronous FL setting.
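One possible reading of such a weighting, sketched below with an invented staleness-decay rule; the paper's exact design may differ. The idea is to discount updates that were computed several aggregation periods ago.

```python
import numpy as np

def age_aware_aggregate(updates, ages, decay=0.5):
    """Weight each client update by a decreasing function of its staleness.

    `ages` counts how many aggregation periods ago each update was computed;
    the 1/(1+age)^decay form is one plausible choice, not the paper's rule.
    """
    weights = np.array([(1.0 + a) ** (-decay) for a in ages])
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

# Three clients report updates of different staleness at the periodic deadline.
updates = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
ages = [0, 2, 5]          # the straggler's update is five periods old
print(age_aware_aggregate(updates, ages))
```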
arXiv Detail & Related papers (2022-12-14T17:33:01Z)
- Collaborative Learning over Wireless Networks: An Introductory Overview [84.09366153693361]
We mainly focus on collaborative training across wireless devices.
Many distributed optimization algorithms developed over the last decades provide data locality; that is, a joint model can be trained collaboratively while the data available at each participating device remains local.
arXiv Detail & Related papers (2021-12-07T20:15:39Z)
- Spatio-Temporal Federated Learning for Massive Wireless Edge Networks [23.389249751372393]
An edge server and numerous mobile devices (clients) jointly learn a global model without transporting the huge amounts of data collected by the mobile devices to the edge server.
The proposed FL approach exploits spatial and temporal correlations between the learning updates from the different mobile devices scheduled to join STFL across training rounds.
An analytical framework is proposed and employed to study the learning capability of STFL via its convergence performance.
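One plausible way to exploit temporal correlation, sketched with an invented substitution rule (not necessarily STFL's estimator): when a client is not scheduled in a round, reuse its previous update, discounted by an assumed correlation coefficient.

```python
import numpy as np

rng = np.random.default_rng(6)
n_clients, d = 6, 4
last_update = {i: rng.normal(size=d) for i in range(n_clients)}

def stfl_like_aggregate(fresh, scheduled, rho=0.8):
    """Fill in unscheduled clients with a discounted previous update.

    rho models the assumed temporal correlation between consecutive
    updates; this substitution rule is an illustrative guess.
    """
    parts = []
    for i in range(n_clients):
        if i in scheduled:
            parts.append(fresh[i])
        else:
            parts.append(rho * last_update[i])   # stale-but-correlated proxy
    return np.mean(parts, axis=0)

scheduled = {0, 2, 4}
fresh = {i: rng.normal(size=d) for i in scheduled}
print(stfl_like_aggregate(fresh, scheduled))
```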
arXiv Detail & Related papers (2021-10-27T16:46:45Z)
- User Scheduling for Federated Learning Through Over-the-Air Computation [22.853678584121862]
A machine learning technique termed federated learning (FL) aims to preserve data at the edge devices and to exchange only ML model parameters during the learning process.
FL not only reduces communication needs but also helps protect local privacy.
Over-the-air computation (AirComp) can compute while transmitting data, by allowing multiple devices to send their data simultaneously using analog modulation.
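The sketch below simulates the basic AirComp idea with channel-inversion power control, a standard textbook scheme; the channel model, noise level, and sizes are invented. Simultaneous analog transmissions superpose in the channel, so the server receives the sum of the pre-scaled updates directly.

```python
import numpy as np

rng = np.random.default_rng(1)
n_devices, dim = 8, 4

# Each device holds a local model-update vector to be averaged at the server.
local_updates = rng.normal(size=(n_devices, dim))
channels = rng.rayleigh(scale=1.0, size=n_devices)   # flat-fading gains

# Devices invert their own channel (power permitting) so that simultaneous
# analog transmissions superpose into the desired sum at the server antenna.
tx_signals = local_updates / channels[:, None]
rx = (channels[:, None] * tx_signals).sum(axis=0)    # the air "computes" the sum
rx += rng.normal(scale=0.05, size=dim)               # receiver noise

aircomp_avg = rx / n_devices
true_avg = local_updates.mean(axis=0)
print(np.round(aircomp_avg, 3), np.round(true_avg, 3))
```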
arXiv Detail & Related papers (2021-08-05T23:58:15Z)
- Accelerating Federated Edge Learning via Optimized Probabilistic Device
Scheduling [57.271494741212166]
This paper formulates and solves the communication time minimization problem.
It is found that the optimized policy gradually turns its priority from suppressing the remaining communication rounds to reducing per-round latency as the training process evolves.
The effectiveness of the proposed scheme is demonstrated via a use case on collaborative 3D object detection in autonomous driving.
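One way to mimic that shifting priority is a scheduling probability that blends update importance early in training with low latency late in training; the linear blend and all device parameters below are invented stand-ins for the optimized policy.

```python
import numpy as np

rng = np.random.default_rng(2)
n_devices, total_rounds = 20, 100
latency = rng.uniform(0.1, 2.0, size=n_devices)       # per-device round latency
importance = rng.uniform(0.5, 1.5, size=n_devices)    # e.g. update informativeness

def schedule_probs(round_idx, total_rounds, alpha=4.0):
    """Blend importance-driven and latency-driven scheduling over time.

    Early rounds favour informative devices (fewer rounds to converge);
    late rounds favour fast devices (lower per-round latency).
    """
    t = round_idx / total_rounds
    score = (1 - t) * importance + t * alpha / latency
    return score / score.sum()

for r in (0, 50, 99):
    p = schedule_probs(r, total_rounds)
    chosen = rng.choice(n_devices, size=5, replace=False, p=p)
    print(f"round {r}: scheduled {sorted(chosen)}, "
          f"round latency {latency[chosen].max():.2f}s")
```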
arXiv Detail & Related papers (2021-07-24T11:39:17Z)
- Cross-Node Federated Graph Neural Network for Spatio-Temporal Data
Modeling [13.426382746638007]
We propose a graph neural network (GNN)-based architecture, Cross-Node Federated GNN (CNFGNN), under the constraint of cross-node federated learning.
CNFGNN operates by disentangling temporal dynamics, modeled on the devices, from spatial dynamics, modeled on the server.
Experiments show that CNFGNN achieves the best forecasting performance in both transductive and inductive learning settings.
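A toy version of that device/server split, with an invented recurrent encoder and adjacency matrix: each device encodes its own time series locally, and the server performs one graph message-passing step over the resulting embeddings.

```python
import numpy as np

rng = np.random.default_rng(3)
n_nodes, t_steps, hidden = 4, 24, 8

# --- On each device: temporal modeling of its own sensor series (stays local).
series = rng.normal(size=(n_nodes, t_steps))
w_in = rng.normal(size=hidden) * 0.1

def temporal_encode(x):
    """Toy recurrent encoder standing in for the on-device sequence model."""
    h = np.zeros(hidden)
    for v in x:
        h = np.tanh(v * w_in + 0.5 * h)
    return h

embeddings = np.stack([temporal_encode(s) for s in series])

# --- On the server: spatial modeling across nodes via one GNN message pass.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
deg = adj.sum(axis=1, keepdims=True)
messages = (adj / deg) @ embeddings        # mean-aggregate neighbour embeddings
spatial_out = np.tanh(messages)            # sent back to devices for forecasting
print(spatial_out.shape)                   # (4, 8)
```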
arXiv Detail & Related papers (2021-06-09T17:12:43Z)
- Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
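One simple instantiation of importance-based device sampling, offered as an illustration rather than FOLB's actual selection criterion: sample devices proportionally to a proxy signal (here, the norm of each device's last update) and reweight for unbiasedness.

```python
import numpy as np

rng = np.random.default_rng(4)
n_devices = 10

# Proxy importance signal per device; FOLB's real criterion differs.
update_norms = rng.uniform(0.1, 3.0, size=n_devices)
probs = update_norms / update_norms.sum()

# Sample 4 devices this round, proportionally to the proxy signal.
sampled = rng.choice(n_devices, size=4, replace=True, p=probs)
# Importance-sampling correction keeps the aggregated update unbiased.
corrections = 1.0 / (n_devices * probs[sampled])
print(sorted(sampled.tolist()), np.round(corrections, 2))
```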
arXiv Detail & Related papers (2020-07-26T14:37:51Z)
- Coded Federated Learning [5.375775284252717]
Federated learning is a method of training a global model from decentralized data distributed across client devices.
Our results show that coded federated learning (CFL) allows the global model to converge nearly four times faster than an uncoded approach.
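Roughly, coded FL has clients upload a coded summary of their data once, so the server can substitute a proxy gradient whenever a straggler misses a round. The sketch below uses random linear projections of linear-regression data as the code; the projection, sizes, and straggler model are invented, not CFL's actual scheme.

```python
import numpy as np

rng = np.random.default_rng(5)
n_clients, n_local, d = 4, 30, 5

# Per-client linear-regression data; w_true generates the labels.
w_true = rng.normal(size=d)
X = [rng.normal(size=(n_local, d)) for _ in range(n_clients)]
y = [x @ w_true + 0.01 * rng.normal(size=n_local) for x in X]

# One-time setup: each client uploads a small coded (randomly projected)
# version of its data; E[G^T G] = I, so coded gradients approximate real ones.
k = 10
G = [rng.normal(size=(k, n_local)) / np.sqrt(k) for _ in range(n_clients)]
X_coded = [g @ x for g, x in zip(G, X)]
y_coded = [g @ yy for g, yy in zip(G, y)]

def grad(xm, ym, w, n):
    """Gradient of 0.5*||Xw - y||^2 / n; n is the underlying sample count."""
    return xm.T @ (xm @ w - ym) / n

w = np.zeros(d)
stragglers = {3}                     # client 3 misses the deadline this round
parts = []
for i in range(n_clients):
    if i in stragglers:
        parts.append(grad(X_coded[i], y_coded[i], w, n_local))  # server proxy
    else:
        parts.append(grad(X[i], y[i], w, n_local))              # exact local
w -= 0.1 * np.mean(parts, axis=0)
print(np.round(w, 3))
```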
arXiv Detail & Related papers (2020-02-21T23:06:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.