FLSTRA: Federated Learning in Stratosphere
- URL: http://arxiv.org/abs/2302.00163v3
- Date: Fri, 9 Jun 2023 14:26:29 GMT
- Title: FLSTRA: Federated Learning in Stratosphere
- Authors: Amin Farajzadeh, Animesh Yadav, Omid Abbasi, Wael Jaafar, Halim
Yanikomeroglu
- Abstract summary: A high altitude platform station facilitates a large number of terrestrial clients to collaboratively learn a global model without sharing the training data.
We develop a joint client selection and resource allocation algorithm for uplink and downlink to minimize the FL delay.
Second, we propose a communication and computation resource-aware (CCRA-FL) algorithm to achieve the target FL accuracy while deriving an upper bound for its convergence rate.
- Score: 22.313423693397556
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a federated learning (FL) in stratosphere (FLSTRA) system, where a
high altitude platform station (HAPS) facilitates a large number of terrestrial
clients to collaboratively learn a global model without sharing the training
data. FLSTRA overcomes the challenges faced by FL in terrestrial networks, such
as slow convergence and high communication delay due to limited client
participation and multi-hop communications. HAPS leverages its altitude and
size to allow the participation of more clients with line-of-sight (LOS) links
and the placement of a powerful server. However, handling many clients at once
introduces computing and transmission delays. Thus, we aim to obtain a
delay-accuracy trade-off for FLSTRA. Specifically, we first develop a joint
client selection and resource allocation algorithm for uplink and downlink to
minimize the FL delay subject to the energy and quality-of-service (QoS)
constraints. Second, we propose a communication and computation resource-aware
(CCRA-FL) algorithm to achieve the target FL accuracy while deriving an upper
bound for its convergence rate. The formulated problem is non-convex; thus, we
propose an iterative algorithm to solve it. Simulation results demonstrate the
effectiveness of the proposed FLSTRA system, compared to terrestrial
benchmarks, in terms of FL delay and accuracy.
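The delay-accuracy trade-off described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual joint selection/allocation or CCRA-FL algorithm; it is a hypothetical stand-in in which clients are admitted greedily under a per-round delay budget and then aggregated by weighted federated averaging. The delay values, budget, and local update rule are all illustrative assumptions.

```python
import numpy as np

def select_clients(delays, budget):
    """Greedily admit clients whose round delays fit within a delay budget.

    With synchronous aggregation the round delay equals the slowest
    selected client, so every client with delay <= budget can be admitted;
    we return them in increasing order of delay. (Hypothetical stand-in
    for the paper's joint client selection and resource allocation.)
    """
    order = np.argsort(delays)
    return [i for i in order if delays[i] <= budget]

def fedavg(client_updates, weights):
    """Weighted federated averaging of client model vectors."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * u for w, u in zip(weights, client_updates))

rng = np.random.default_rng(0)
n_clients, dim = 20, 5
delays = rng.uniform(0.1, 2.0, n_clients)   # per-round delay in seconds (assumed)
sizes = rng.integers(50, 500, n_clients)    # local dataset sizes (assumed)

global_model = np.zeros(dim)
for rnd in range(3):
    chosen = select_clients(delays, budget=1.0)
    # Each chosen client "trains" locally: here, a noisy pull toward the
    # all-ones vector stands in for local SGD on private data.
    updates = [global_model + 0.5 * (np.ones(dim) - global_model)
               + 0.01 * rng.standard_normal(dim) for _ in chosen]
    global_model = fedavg(updates, [sizes[i] for i in chosen])
```

Tightening the budget shrinks the selected set (lower round delay, fewer participating clients per round), while loosening it admits stragglers (higher delay, more data per aggregation), which is the trade-off the paper optimizes jointly with uplink/downlink resource allocation.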
Related papers
- Hyperdimensional Computing Empowered Federated Foundation Model over Wireless Networks for Metaverse [56.384390765357004]
We propose an integrated federated split learning and hyperdimensional computing framework for emerging foundation models.
This novel approach reduces communication costs, computation load, and privacy risks, making it suitable for resource-constrained edge devices in the Metaverse.
arXiv Detail & Related papers (2024-08-26T17:03:14Z)
- Client Orchestration and Cost-Efficient Joint Optimization for NOMA-Enabled Hierarchical Federated Learning [55.49099125128281]
We propose a non-orthogonal multiple access (NOMA) enabled HFL system under semi-synchronous cloud model aggregation.
We show that the proposed scheme outperforms the considered benchmarks regarding HFL performance improvement and total cost reduction.
arXiv Detail & Related papers (2023-11-03T13:34:44Z)
- Effectively Heterogeneous Federated Learning: A Pairing and Split Learning Based Approach [16.093068118849246]
This paper presents a novel split federated learning (SFL) framework that pairs clients with different computational resources.
A greedy algorithm is proposed by reconstructing the optimization of training latency as a graph edge selection problem.
Simulation results show the proposed method can significantly improve the FL training speed and achieve high performance.
arXiv Detail & Related papers (2023-08-26T11:10:54Z)
- Learner Referral for Cost-Effective Federated Learning Over Hierarchical IoT Networks [21.76836812021954]
This paper proposes learner-referral-aided federated client selection (LRef-FedCS), communications resource allocation, and local model accuracy optimization (LMAO) methods.
Our proposed LRef-FedCS approach could achieve a good balance between high global accuracy and reducing cost.
arXiv Detail & Related papers (2023-07-19T13:33:43Z)
- A-LAQ: Adaptive Lazily Aggregated Quantized Gradient [11.990047476303252]
Federated Learning (FL) plays a prominent role in solving machine learning problems with data distributed across clients.
In FL, to reduce the communication overhead of data between clients and the server, each client communicates the local FL parameters instead of the local data.
This paper proposes Adaptive Lazily Aggregated Quantized Gradient (A-LAQ), which significantly extends LAQ by assigning an adaptive number of communication bits during the FL iterations.
arXiv Detail & Related papers (2022-10-31T16:59:58Z)
- Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
arXiv Detail & Related papers (2022-09-02T10:29:56Z)
- Federated learning for LEO constellations via inter-HAP links [0.0]
Low Earth Orbit (LEO) satellite constellations have seen a sharp increase in deployment in recent years.
To apply machine learning (ML) in such applications, the traditional way of downloading satellite data such as imagery to a ground station (GS) is not desirable.
We show that existing FL solutions do not fit well in such LEO constellation scenarios because of significant challenges such as excessive convergence delay and unreliable wireless channels.
arXiv Detail & Related papers (2022-05-15T08:22:52Z)
- OFedQIT: Communication-Efficient Online Federated Learning via Quantization and Intermittent Transmission [7.6058140480517356]
Online federated learning (OFL) is a promising framework to collaboratively learn a sequence of non-linear functions (or models) from distributed streaming data.
We propose a communication-efficient OFL algorithm (named OFedQIT) by means of a quantization and an intermittent transmission.
Our analysis reveals that OFedQIT successfully addresses the drawbacks of OFedAvg while maintaining superior learning accuracy.
arXiv Detail & Related papers (2022-05-13T07:46:43Z)
- Time-triggered Federated Learning over Wireless Networks [48.389824560183776]
We present a time-triggered FL algorithm (TT-Fed) over wireless networks.
Our proposed TT-Fed algorithm improves the converged test accuracy by up to 12.5% and 5%, respectively, compared to the baseline schemes.
arXiv Detail & Related papers (2022-04-26T16:37:29Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL) as a paradigm of collaborative learning techniques has obtained increasing research attention.
It is of interest to investigate fast responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Delay Minimization for Federated Learning Over Wireless Communication Networks [172.42768672943365]
The problem of delay computation for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution.
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
arXiv Detail & Related papers (2020-07-05T19:00:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.