Federated Learning via Indirect Server-Client Communications
- URL: http://arxiv.org/abs/2302.07323v1
- Date: Tue, 14 Feb 2023 20:12:36 GMT
- Title: Federated Learning via Indirect Server-Client Communications
- Authors: Jieming Bian, Cong Shen, Jie Xu
- Abstract summary: Federated Learning (FL) is a communication-efficient and privacy-preserving distributed machine learning framework.
We propose a novel FL framework, named FedEx, that utilizes mobile transporters to establish indirect communication channels between the server and the clients.
Two algorithms, called FedEx-Sync and FedEx-Async, are developed depending on whether the transporters adopt a synchronized or an asynchronous schedule.
- Score: 20.541942109704987
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a communication-efficient and privacy-preserving
distributed machine learning framework that has gained a significant amount of
research attention recently. Despite the different forms of FL algorithms
(e.g., synchronous FL, asynchronous FL) and the underlying optimization
methods, nearly all existing works implicitly assumed the existence of a
communication infrastructure that facilitates the direct communication between
the server and the clients for the model data exchange. This assumption,
however, does not hold in many real-world applications that can benefit from
distributed learning but lack a proper communication infrastructure (e.g.,
smart sensing in remote areas). In this paper, we propose a novel FL framework,
named FedEx (short for FL via Model Express Delivery), that utilizes mobile
transporters (e.g., Unmanned Aerial Vehicles) to establish indirect
communication channels between the server and the clients. Two algorithms,
called FedEx-Sync and FedEx-Async, are developed depending on whether the
transporters adopt a synchronized or an asynchronous schedule. Even though the
indirect communications introduce heterogeneous delays to clients for both the
global model dissemination and the local model collection, we prove the
convergence of both versions of FedEx. The convergence analysis subsequently
sheds light on how to assign clients to different transporters and design the
routes among the clients. The performance of FedEx is evaluated through
experiments in a simulated network on two public datasets.
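To make the indirect-communication idea concrete, below is a minimal Python sketch of a FedEx-Sync-style round with transporters carrying the global model along fixed routes; the toy client objectives, route assignments, and hyperparameters are illustrative assumptions, not the paper's actual setup or code.
```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each client holds a small least-squares problem (illustrative only).
NUM_CLIENTS, DIM = 6, 5
clients = [{"A": rng.normal(size=(20, DIM)), "b": rng.normal(size=20)}
           for _ in range(NUM_CLIENTS)]

def local_sgd(w, data, steps=5, lr=0.01):
    """A few local gradient steps on one client's objective (FedAvg-style)."""
    A, b = data["A"], data["b"]
    for _ in range(steps):
        w = w - lr * A.T @ (A @ w - b) / len(b)
    return w

# Transporters: each serves a fixed route (subset of clients) -- an assumption that
# mirrors the client-assignment / route-design question raised in the abstract.
routes = [[0, 1, 2], [3, 4, 5]]

global_w = np.zeros(DIM)
for rnd in range(10):                       # synchronized rounds (FedEx-Sync-like)
    collected = []
    for route in routes:                    # each transporter departs with the current global model
        onboard = global_w.copy()           # model carried by the transporter
        for c in route:                     # visit clients in order; travel delays are implicit here
            collected.append(local_sgd(onboard, clients[c]))
    # Transporters return; the server averages everything they brought back.
    global_w = np.mean(collected, axis=0)

print("final global model:", np.round(global_w, 3))
```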
Related papers
- FedMoE-DA: Federated Mixture of Experts via Domain Aware Fine-grained Aggregation [22.281467168796645]
Federated learning (FL) is a collaborative machine learning approach that enables multiple clients to train models without sharing their private data.
We propose FedMoE-DA, a new FL model training framework that incorporates a novel domain-aware, fine-grained aggregation strategy to enhance the robustness, personalizability, and communication efficiency simultaneously.
arXiv Detail & Related papers (2024-11-04T14:29:04Z)
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, FedCSD (Class prototype Similarity Distillation in a federated framework), to align the local and global models.
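As an illustration of what such an alignment objective could look like (the exact FedCSD formulation may differ), here is a hedged sketch in which the local model's similarities to class prototypes are distilled toward those of the global model; the helper `prototype_similarity_logits` and the temperature value are assumptions.
```python
import torch
import torch.nn.functional as F

def prototype_similarity_logits(features, prototypes, temperature=0.5):
    """Cosine similarity of sample features to per-class prototypes, used as logits."""
    f = F.normalize(features, dim=1)
    p = F.normalize(prototypes, dim=1)
    return f @ p.t() / temperature

def csd_alignment_loss(local_feats, global_feats, prototypes):
    """Distill the global model's similarity structure into the local model (illustrative only)."""
    local_logits = prototype_similarity_logits(local_feats, prototypes)
    global_logits = prototype_similarity_logits(global_feats, prototypes).detach()
    return F.kl_div(F.log_softmax(local_logits, dim=1),
                    F.softmax(global_logits, dim=1),
                    reduction="batchmean")

# Usage sketch: features from the local and (frozen) global model on the same batch.
local_feats = torch.randn(8, 16, requires_grad=True)
global_feats = torch.randn(8, 16)
prototypes = torch.randn(10, 16)            # one prototype per class (assumed given)
loss = csd_alignment_loss(local_feats, global_feats, prototypes)
loss.backward()
```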
arXiv Detail & Related papers (2023-08-20T04:41:01Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training.
In this paper, we present a novel FL algorithm, FedIns, that handles intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- Joint Client Assignment and UAV Route Planning for Indirect-Communication Federated Learning [20.541942109704987]
A new framework called FedEx (Federated Learning via Model Express Delivery) is proposed.
It employs mobile transporters, such as UAVs, to establish indirect communication channels between the server and clients.
Two algorithms, FedEx-Sync and FedEx-Async, are proposed for synchronous and asynchronous learning at the transporter level.
arXiv Detail & Related papers (2023-04-21T04:47:54Z)
- Federated Nearest Neighbor Machine Translation [66.8765098651988]
In this paper, we propose a novel federated nearest neighbor (FedNN) machine translation framework.
FedNN leverages one-round memorization-based interaction to share knowledge across different clients.
Experiments show that FedNN significantly reduces computational and communication costs compared with FedAvg.
arXiv Detail & Related papers (2023-02-23T18:04:07Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
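One plausible reading of the contrastive-distillation step is sketched below, assuming clients exchange representations computed on a common reference batch; the InfoNCE-style loss and its temperature are illustrative, not the paper's exact objective.
```python
import torch
import torch.nn.functional as F

def contrastive_distillation_loss(own_repr, peer_repr, temperature=0.1):
    """InfoNCE-style loss: each sample's own representation should match the peer
    representation of the same sample (positive) more than those of other samples
    (negatives). Illustrative only; the paper's objective may differ."""
    own = F.normalize(own_repr, dim=1)
    peer = F.normalize(peer_repr, dim=1)
    logits = own @ peer.t() / temperature        # (N, N) similarity matrix
    targets = torch.arange(own.size(0))          # diagonal entries are the positives
    return F.cross_entropy(logits, targets)

own = torch.randn(32, 64, requires_grad=True)    # local encoder output on a shared batch
peer = torch.randn(32, 64)                       # representations received from peers
loss = contrastive_distillation_loss(own, peer)
loss.backward()
```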
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- Accelerating Asynchronous Federated Learning Convergence via Opportunistic Mobile Relaying [3.802258033231335]
We study the impact of mobility on the convergence performance of asynchronous Federated Learning (FL) algorithms.
By exploiting mobility, the study shows that clients can indirectly communicate with the server through another client serving as a relay.
We propose a new FL algorithm, called FedMobile, that incorporates opportunistic relaying and addresses key questions such as when and how to relay.
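A toy sketch of an opportunistic relay rule in this spirit, assuming each client's next direct server-contact time is known or predicted; the decision rule and the staleness_budget threshold are illustrative assumptions, not FedMobile's actual policy.
```python
import numpy as np

rng = np.random.default_rng(1)

# Toy mobility model: each client's next direct contact time with the server (assumed known).
next_server_contact = {c: rng.uniform(0, 100) for c in range(5)}

def maybe_relay(src, peer, now, staleness_budget=20.0):
    """Opportunistic relay rule (illustrative): when two clients meet, hand the update
    to the peer if it will reach the server sooner than the source and soon enough."""
    gain = next_server_contact[src] - next_server_contact[peer]
    return gain > 0 and next_server_contact[peer] - now < staleness_budget

# Example encounter at time t=10 between clients 0 and 3.
print("relay 0 -> 3:", maybe_relay(0, 3, now=10.0))
```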
arXiv Detail & Related papers (2022-06-09T19:23:20Z)
- Double Momentum SGD for Federated Learning [94.58442574293021]
We propose a new SGD variant, named DOMO, to improve model performance in federated learning.
One momentum buffer tracks the server update direction, while the other tracks the local update direction.
We introduce a novel server momentum fusion technique to coordinate the server and local momentum SGD.
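A hedged sketch of how a server round with two momentum signals might look; the fusion coefficient and the exact way the buffers are combined are assumptions for illustration, not the precise DOMO update.
```python
import numpy as np

def domo_style_round(global_w, client_deltas, server_m, beta_s=0.9, lr_s=1.0, fusion=0.1):
    """One server round with two momentum signals (illustrative reading of DOMO):
    server_m tracks the server update direction across rounds, while the averaged
    client delta reflects the clients' local (momentum) update direction. The
    'fusion' term blends the server momentum back into the applied update."""
    avg_delta = np.mean(client_deltas, axis=0)        # local update direction
    server_m = beta_s * server_m + avg_delta          # server momentum buffer
    update = fusion * server_m + (1 - fusion) * avg_delta
    return global_w - lr_s * update, server_m

w = np.zeros(4)
m = np.zeros(4)
deltas = [np.full(4, 0.1), np.full(4, 0.3)]           # pretend per-client pseudo-gradients
w, m = domo_style_round(w, deltas, m)
print(w)
```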
arXiv Detail & Related papers (2021-02-08T02:47:24Z)
- FedAT: A High-Performance and Communication-Efficient Federated Learning System with Asynchronous Tiers [22.59875034596411]
We present FedAT, a novel Federated learning method with Asynchronous Tiers under Non-i.i.d. data.
FedAT minimizes the straggler effect with improved convergence speed and test accuracy.
Results show that FedAT improves the prediction performance by up to 21.09%, and reduces the communication cost by up to 8.5x, compared to state-of-the-art FL methods.
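A minimal sketch of the tiering idea, assuming clients are grouped purely by profiled response time; FedAT's actual tiering and weighted aggregation are more involved.
```python
import numpy as np

rng = np.random.default_rng(2)

# Observed per-round response times for each client (seconds), e.g. from profiling.
response_times = {c: rng.uniform(1, 60) for c in range(12)}

def assign_tiers(times, num_tiers=3):
    """Group clients into tiers of similar speed: tier 0 = fastest, last = slowest.
    Within a tier training can proceed synchronously; tiers update the server
    asynchronously, which limits how long fast clients wait for stragglers."""
    ordered = sorted(times, key=times.get)
    chunks = np.array_split(ordered, num_tiers)
    return [list(chunk) for chunk in chunks]

print(assign_tiers(response_times))
```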
arXiv Detail & Related papers (2020-10-12T18:38:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.