Fast Federated Edge Learning with Overlapped Communication and
Computation and Channel-Aware Fair Client Scheduling
- URL: http://arxiv.org/abs/2109.06710v1
- Date: Tue, 14 Sep 2021 14:16:01 GMT
- Title: Fast Federated Edge Learning with Overlapped Communication and
Computation and Channel-Aware Fair Client Scheduling
- Authors: Mehmet Emre Ozfatura, Junlin Zhao, and Deniz Gündüz
- Abstract summary: We consider federated edge learning (FEEL) over wireless fading channels taking into account the downlink and uplink channel latencies.
We propose two alternative schemes with fairness considerations, termed age-aware MRTP (A-MRTP) and opportunistically fair MRTP (OF-MRTP).
It is shown through numerical simulations that OF-MRTP provides a significant reduction in latency without sacrificing test accuracy.
- Score: 2.294014185517203
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider federated edge learning (FEEL) over wireless fading channels
taking into account the downlink and uplink channel latencies, and the random
computation delays at the clients. We speed up the training process by
overlapping the communication with computation. With fountain coded
transmission of the global model update, clients receive the global model
asynchronously, and start performing local computations right away. Then, we
propose a dynamic client scheduling policy, called MRTP, for uploading local
model updates to the parameter server (PS), which, at any time, schedules the
client with the minimum remaining upload time. However, MRTP can lead to biased
participation of clients in the update process, resulting in performance
degradation in non-iid data scenarios. To overcome this, we propose two
alternative schemes with fairness considerations, termed age-aware MRTP
(A-MRTP) and opportunistically fair MRTP (OF-MRTP). In A-MRTP, the remaining
clients are scheduled according to the ratio between their remaining
transmission time and the update age, while in OF-MRTP, the selection
mechanism utilizes the long-term average channel rate of the clients to
further reduce latency while ensuring fair participation of the clients. It
is shown through numerical simulations that OF-MRTP provides a significant
reduction in latency without sacrificing test accuracy.
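The scheduling rules described in the abstract can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: it assumes per-client estimates of remaining upload time and update age are available as dictionaries, and all function and variable names are hypothetical.

```python
def mrtp_select(remaining_upload_time):
    """MRTP rule: schedule the client with the minimum remaining upload time."""
    return min(remaining_upload_time, key=remaining_upload_time.get)

def a_mrtp_select(remaining_upload_time, update_age):
    """A-MRTP rule: rank clients by the ratio of remaining transmission time
    to update age, so clients with stale updates (large age) are favoured,
    trading some latency for fairer participation."""
    return min(remaining_upload_time,
               key=lambda c: remaining_upload_time[c] / update_age[c])

# Example: client C uploads fastest, so MRTP picks C; client B has a very
# stale update, so A-MRTP picks B despite its longer upload time.
remaining = {"A": 3.0, "B": 5.0, "C": 2.0}   # seconds (illustrative)
age = {"A": 1, "B": 10, "C": 1}              # rounds since last participation
print(mrtp_select(remaining))        # -> C
print(a_mrtp_select(remaining, age)) # -> B
```

OF-MRTP would replace the instantaneous quantities above with long-term average channel rates; the selection logic follows the same pattern.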
Related papers
- Modelling Concurrent RTP Flows for End-to-end Predictions of QoS in Real Time Communications [5.159808922904932]
We propose Packet-to-Prediction (P2P), a novel deep learning framework for predicting Quality of Service (QoS) metrics.
We implement a streamlined architecture, capable of handling an unlimited number of RTP flows, and employ a multi-task learning paradigm to forecast four key metrics in a single shot.
Our work is based on extensive traffic collected during real video calls, and P2P outperforms comparative models in both prediction performance and temporal efficiency.
arXiv Detail & Related papers (2024-10-21T10:16:56Z) - Risk-Aware Accelerated Wireless Federated Learning with Heterogeneous
Clients [21.104752782245257]
Wireless Federated Learning (FL) is an emerging distributed machine learning paradigm.
This paper proposes a novel risk-aware accelerated FL framework that accounts for the clients' heterogeneity in the amount of data they possess.
The proposed scheme is benchmarked against a conservative scheme (i.e., only allowing trustworthy devices) and an aggressive scheme (i.e., oblivious to the trust metric).
arXiv Detail & Related papers (2024-01-17T15:15:52Z) - Client Orchestration and Cost-Efficient Joint Optimization for
NOMA-Enabled Hierarchical Federated Learning [55.49099125128281]
We propose a non-orthogonal multiple access (NOMA) enabled HFL system under semi-synchronous cloud model aggregation.
We show that the proposed scheme outperforms the considered benchmarks regarding HFL performance improvement and total cost reduction.
arXiv Detail & Related papers (2023-11-03T13:34:44Z) - Joint Age-based Client Selection and Resource Allocation for
Communication-Efficient Federated Learning over NOMA Networks [8.030674576024952]
In federated learning (FL), distributed clients can collaboratively train a shared global model while retaining their own training data locally.
In this paper, a joint optimization problem of client selection and resource allocation is formulated, aiming to minimize the total time consumption of each round in FL over a non-orthogonal multiple access (NOMA) enabled wireless network.
In addition, a server-side artificial neural network (ANN) is proposed to predict the FL models of clients who are not selected at each round to further improve FL performance.
arXiv Detail & Related papers (2023-04-18T13:58:16Z) - Robust Federated Learning with Connectivity Failures: A
Semi-Decentralized Framework with Collaborative Relaying [27.120495678791883]
Intermittent client connectivity is one of the major challenges in centralized federated edge learning frameworks.
We propose a collaborative relaying based semi-decentralized federated edge learning framework.
arXiv Detail & Related papers (2022-02-24T01:06:42Z) - Communication-Efficient Federated Learning with Accelerated Client Gradient [46.81082897703729]
Federated learning often suffers from slow and unstable convergence due to the heterogeneous characteristics of participating client datasets.
We propose a simple but effective federated learning framework, which improves the consistency across clients and facilitates the convergence of the server model.
We provide the theoretical convergence rate of our algorithm and demonstrate remarkable performance gains in terms of accuracy and communication efficiency.
arXiv Detail & Related papers (2022-01-10T05:31:07Z) - Low-Latency Federated Learning over Wireless Channels with Differential
Privacy [142.5983499872664]
In federated learning (FL), model training is distributed over clients and local models are aggregated by a central server.
In this paper, we aim to minimize FL training delay over wireless channels, constrained by overall training performance as well as each client's differential privacy (DP) requirement.
arXiv Detail & Related papers (2021-06-20T13:51:18Z) - Joint Client Scheduling and Resource Allocation under Channel
Uncertainty in Federated Learning [47.97586668316476]
Federated learning (FL) over wireless networks depends on the reliability of the client-server connectivity and clients' local computation capabilities.
In this article, we investigate the problem of client scheduling and resource block (RB) allocation to enhance the performance of model training using FL.
The proposed method reduces the training accuracy loss gap by up to 40.7% compared to state-of-the-art client scheduling and RB allocation methods.
arXiv Detail & Related papers (2021-06-12T15:18:48Z) - Straggler-Resilient Federated Learning: Leveraging the Interplay Between
Statistical Accuracy and System Heterogeneity [57.275753974812666]
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
arXiv Detail & Related papers (2020-12-28T19:21:14Z) - Multi-Armed Bandit Based Client Scheduling for Federated Learning [91.91224642616882]
Federated learning (FL) features ubiquitous properties such as reduced communication overhead and preserved data privacy.
In each communication round of FL, the clients update local models based on their own data and upload their local updates via wireless channels.
This work provides a multi-armed bandit-based framework for online client scheduling (CS) in FL without knowing wireless channel state information and statistical characteristics of clients.
arXiv Detail & Related papers (2020-07-05T12:32:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.