Joint Client Assignment and UAV Route Planning for
Indirect-Communication Federated Learning
- URL: http://arxiv.org/abs/2304.10744v2
- Date: Mon, 24 Apr 2023 14:57:18 GMT
- Title: Joint Client Assignment and UAV Route Planning for
Indirect-Communication Federated Learning
- Authors: Jieming Bian, Cong Shen, Jie Xu
- Abstract summary: A new framework called FedEx (Federated Learning via Model Express Delivery) is proposed.
It employs mobile transporters, such as UAVs, to establish indirect communication channels between the server and clients.
Two algorithms, FedEx-Sync and FedEx-Async, are proposed for synchronous and asynchronous learning at the transporter level.
- Score: 20.541942109704987
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a machine learning approach that enables the
creation of shared models for powerful applications while allowing data to
remain on devices. This approach offers benefits such as improved data
privacy and security, as well as reduced latency. However, in some systems, direct
communication between clients and servers may not be possible, such as remote
areas without proper communication infrastructure. To overcome this challenge,
a new framework called FedEx (Federated Learning via Model Express Delivery) is
proposed. This framework employs mobile transporters, such as UAVs, to
establish indirect communication channels between the server and clients. These
transporters act as intermediaries and allow for model information exchange.
The use of indirect communication presents new challenges for convergence
analysis and optimization, as the delay introduced by the transporters'
movement creates issues for both global model dissemination and local model
collection. To address this, two algorithms, FedEx-Sync and FedEx-Async, are
proposed for synchronous and asynchronous learning at the transporter level.
Additionally, a bi-level optimization algorithm is proposed to solve the joint
client assignment and route planning problem. Experimental validation using two
public datasets in a simulated network demonstrates results consistent with the
theory, confirming the efficacy of FedEx.
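To make the workflow concrete, here is a minimal sketch of one synchronized round in the spirit of FedEx-Sync. It assumes a fixed client-to-transporter assignment and omits the paper's delay modeling, convergence weighting, and route optimization; all names and the toy least-squares task are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 3
# Toy non-IID clients: each holds a private least-squares dataset.
clients = [(rng.normal(size=(20, dim)), rng.normal(size=20)) for _ in range(6)]
# Hypothetical client assignment: transporter (UAV) id -> route over clients.
assignment = {0: [0, 1, 2], 1: [3, 4, 5]}

def local_sgd(w, data, lr=0.1, steps=5):
    """A client's local training: a few SGD steps on its own data."""
    X, y = data
    w = w.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

global_model = np.zeros(dim)
for rnd in range(10):
    collected = []
    # FedEx-Sync: all transporters depart together carrying the same global
    # model, deliver it to each assigned client, and collect local updates.
    for uav, route in assignment.items():
        for c in route:
            collected.append(local_sgd(global_model, clients[c]))
    # Back at the server, the returned local models are averaged (FedAvg-style).
    global_model = np.mean(collected, axis=0)
```

In the actual framework the route order and client assignment are the decision variables of the bi-level optimization; the sketch fixes both to keep the round structure visible.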
Related papers
- FedMoE-DA: Federated Mixture of Experts via Domain Aware Fine-grained Aggregation [22.281467168796645]
Federated learning (FL) is a collaborative machine learning approach that enables multiple clients to train models without sharing their private data.
We propose FedMoE-DA, a new FL model training framework that incorporates a novel domain-aware, fine-grained aggregation strategy to simultaneously enhance robustness, personalization, and communication efficiency.
arXiv Detail & Related papers (2024-11-04T14:29:04Z)
- FusionLLM: A Decentralized LLM Training System on Geo-distributed GPUs with Adaptive Compression [55.992528247880685]
Decentralized training faces significant challenges regarding system design and efficiency.
We present FusionLLM, a decentralized training system designed and implemented for training large deep neural networks (DNNs).
We show that our system and method can achieve a 1.45-9.39x speedup compared to baseline methods while ensuring convergence.
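The summary does not spell out FusionLLM's adaptive compression, so the snippet below shows only a generic top-k gradient sparsification of the kind such systems use to cut wide-area traffic; the function names and fixed ratio are assumptions, not FusionLLM's API.

```python
import numpy as np

def compress_topk(grad, ratio=0.1):
    """Keep only the largest-magnitude fraction of gradient entries."""
    k = max(1, int(grad.size * ratio))
    flat = grad.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of the top-k entries
    return idx, flat[idx]                         # sparse payload to transmit

def decompress(idx, vals, shape):
    """Receiver side: scatter the sparse payload back into a dense tensor."""
    out = np.zeros(int(np.prod(shape)))
    out[idx] = vals
    return out.reshape(shape)

g = np.random.default_rng(1).normal(size=(4, 8))
idx, vals = compress_topk(g, ratio=0.25)
g_hat = decompress(idx, vals, g.shape)  # lossy reconstruction; ~75% less traffic
```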
arXiv Detail & Related papers (2024-10-16T16:13:19Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
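FedLALR's exact schedule is not reproduced in this summary; the class below sketches only the general shape of a per-client AMSGrad-style step, in which each client keeps its own moment estimates and therefore its own effective learning rate (hyperparameter names and values are illustrative).

```python
import numpy as np

class ClientAMSGrad:
    """Per-client optimizer state: each client auto-tunes its own step size."""
    def __init__(self, dim, lr=0.01, b1=0.9, b2=0.99, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.m = np.zeros(dim)      # first moment (direction smoothing)
        self.v = np.zeros(dim)      # second moment (scale estimate)
        self.v_max = np.zeros(dim)  # running max of v: the AMSGrad correction

    def step(self, w, grad):
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad**2
        self.v_max = np.maximum(self.v_max, self.v)
        # The effective learning rate lr / sqrt(v_max) differs per client,
        # since each client's v_max reflects its own (non-IID) gradients.
        return w - self.lr * self.m / (np.sqrt(self.v_max) + self.eps)
```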
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
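As a rough illustration of analog over-the-air computation (not this paper's exact transceiver design): clients transmit pre-scaled updates simultaneously, the multiple-access channel sums them physically, and the server reads off a noisy average in a single slot.

```python
import numpy as np

rng = np.random.default_rng(0)
updates = [rng.normal(size=8) for _ in range(5)]  # clients' local model updates

# Each client pre-scales by the inverse of its known channel gain so the
# superimposed waveforms align coherently at the receiver.
gains = rng.uniform(0.5, 1.5, size=5)
tx = [u / g for u, g in zip(updates, gains)]

# The channel itself performs the sum; we model only added receiver noise.
received = sum(g * x for g, x in zip(gains, tx)) + rng.normal(scale=0.01, size=8)
avg_estimate = received / len(updates)            # one-shot noisy average

print(np.max(np.abs(avg_estimate - np.mean(updates, axis=0))))  # small error
```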
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- Federated Learning via Indirect Server-Client Communications [20.541942109704987]
Federated Learning (FL) is a communication-efficient and privacy-preserving distributed machine learning framework.
We propose a novel FL framework, named FedEx, that utilizes mobile transporters to establish indirect communication channels between the server and the clients.
Two algorithms, called FedEx-Sync and FedEx-Async, are developed depending on whether the transporters adopt a synchronous or an asynchronous schedule.
arXiv Detail & Related papers (2023-02-14T20:12:36Z)
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape from original data.
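FedDM's full procedure is not given here; the toy sketch below conveys the underlying idea on a linear least-squares model: a client tunes a small synthetic set so its gradient matches the gradient of the real local data at random probe points (set sizes, step size, and the labels-only parameterization are simplifying assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)
X_real, y_real = rng.normal(size=(100, 4)), rng.normal(size=100)
X_syn = rng.normal(size=(10, 4))   # small synthetic inputs (kept fixed here)
y_syn = rng.normal(size=10)        # synthetic labels to be learned

def grad(X, y, w):
    """Least-squares gradient of model w on dataset (X, y)."""
    return X.T @ (X @ w - y) / len(y)

for _ in range(300):
    w = rng.normal(size=4)         # random probe point in parameter space
    diff = grad(X_syn, y_syn, w) - grad(X_real, y_real, w)
    # Descend ||diff||^2 w.r.t. y_syn (its gradient is -2 * X_syn @ diff / n).
    y_syn += 0.5 * 2 * (X_syn @ diff) / len(y_syn)

# After matching, the server can train on the tiny synthetic set instead of
# collecting many rounds of local gradients, cutting communication.
```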
arXiv Detail & Related papers (2022-07-20T04:55:18Z)
- Accelerating Asynchronous Federated Learning Convergence via Opportunistic Mobile Relaying [3.802258033231335]
We study the impact of mobility on the convergence performance of asynchronous Federated Learning (FL) algorithms.
The study shows that, by exploiting mobility, clients can indirectly communicate with the server through another client serving as a relay.
We propose a new FL algorithm, called FedMobile, that incorporates opportunistic relaying and addresses key questions such as when and how to relay.
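The "when and how to relay" rule is left abstract in this summary; the toy sketch below captures the opportunistic idea: on an encounter, staged updates are handed to whichever client is predicted to meet the server sooner (the contact-time fields and helper are invented for illustration).

```python
from dataclasses import dataclass, field

@dataclass
class Client:
    name: str
    carried: list = field(default_factory=list)  # updates this client carries
    next_server_contact: float = float("inf")    # predicted next server meeting

def maybe_relay(a: Client, b: Client) -> None:
    """On an encounter, pass all carried updates to the earlier-contact client."""
    holder, relay = (a, b) if b.next_server_contact < a.next_server_contact else (b, a)
    relay.carried.extend(holder.carried)  # the relay now carries both payloads
    holder.carried.clear()

u = Client("A", carried=[("A", [0.1, -0.2])], next_server_contact=9.0)
v = Client("B", carried=[("B", [0.3, 0.0])], next_server_contact=2.5)
maybe_relay(u, v)                       # B meets the server sooner, so B relays
print([name for name, _ in v.carried])  # ['B', 'A']
```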
arXiv Detail & Related papers (2022-06-09T19:23:20Z)
- To Talk or to Work: Delay Efficient Federated Learning over Mobile Edge Devices [13.318419040823088]
Mobile devices collaborate to train a model based on their own data under the coordination of a central server.
Without central availability of data, computing nodes need to communicate model updates frequently to attain convergence.
We propose a delay-efficient FL mechanism that reduces the overall time (consisting of both the computation and communication latencies) and communication rounds required for the model to converge.
arXiv Detail & Related papers (2021-11-01T00:35:32Z)
- Double Momentum SGD for Federated Learning [94.58442574293021]
We propose a new SGD variant named DOMO to improve model performance in federated learning.
One momentum buffer tracks the server update direction, while the other tracks the local update direction.
We introduce a novel server momentum fusion technique to coordinate the server and local momentum SGD.
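A minimal sketch of the double-buffer idea described above, on a toy quadratic task: the server keeps a momentum of averaged update directions and fuses it into each client's local momentum SGD at the start of the round. Seeding the local buffer with the server buffer is a guessed placeholder for DOMO's fusion rule, not the paper's formula, and all hyperparameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_clients, beta, mu, lr = 4, 3, 0.5, 0.9, 0.05
optima = rng.normal(loc=1.0, scale=0.2, size=(n_clients, dim))  # non-IID optima

w_global = np.zeros(dim)
server_m = np.zeros(dim)  # buffer 1: tracks the server update direction

for rnd in range(30):
    deltas = []
    for c in range(n_clients):
        w = w_global.copy()
        local_m = server_m.copy()  # fusion: seed the local buffer (assumption)
        for _ in range(5):         # local momentum SGD on f_c(w)=||w-opt_c||^2/2
            grad = w - optima[c]
            local_m = mu * local_m + grad   # buffer 2: local update direction
            w -= lr * local_m
        deltas.append(w_global - w)         # client's overall descent direction
    server_m = beta * server_m + np.mean(deltas, axis=0)
    w_global -= server_m                    # momentum-accelerated global step

print(np.round(w_global, 2))  # approaches the mean of the client optima
```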
arXiv Detail & Related papers (2021-02-08T02:47:24Z)
- FedAT: A High-Performance and Communication-Efficient Federated Learning System with Asynchronous Tiers [22.59875034596411]
We present FedAT, a novel Federated learning method with Asynchronous Tiers under Non-i.i.d. data.
FedAT minimizes the straggler effect with improved convergence speed and test accuracy.
Results show that FedAT improves the prediction performance by up to 21.09%, and reduces the communication cost by up to 8.5x, compared to state-of-the-art FL methods.
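As a rough sketch of the tiering idea (thresholds and blend weights below are invented, not FedAT's exact scheme): clients are bucketed by profiled response time, each tier aggregates on its own asynchronous clock, and the server blends the per-tier models with weights that favor the fresher, faster tiers.

```python
import numpy as np

rng = np.random.default_rng(0)
latencies = rng.lognormal(mean=0.0, sigma=1.0, size=12)  # profiled per client

# Partition clients into three tiers by latency quantiles (tier 0 = fastest).
edges = np.quantile(latencies, [1 / 3, 2 / 3])
tiers = np.digitize(latencies, edges)
print({t: list(np.where(tiers == t)[0]) for t in range(3)})

# Each tier would aggregate at its own pace; here we only show the server-side
# blend of the latest per-tier models, weighted toward the fresher fast tiers.
tier_models = {t: rng.normal(size=4) for t in range(3)}
weights = np.array([0.5, 0.3, 0.2])  # staleness-aware weights (illustrative)
global_model = sum(w * tier_models[t] for t, w in enumerate(weights))
```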
arXiv Detail & Related papers (2020-10-12T18:38:51Z)
- Communication-Efficient and Distributed Learning Over Wireless Networks: Principles and Applications [55.65768284748698]
Machine learning (ML) is a promising enabler for the fifth generation (5G) communication systems and beyond.
This article aims to provide a holistic overview of relevant communication and ML principles, and thereby present communication-efficient and distributed learning frameworks with selected use cases.
arXiv Detail & Related papers (2020-08-06T12:37:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.