Federated Learning Model Aggregation in Heterogeneous Aerial and Space Networks
- URL: http://arxiv.org/abs/2305.16351v3
- Date: Tue, 16 Apr 2024 18:50:45 GMT
- Title: Federated Learning Model Aggregation in Heterogeneous Aerial and Space Networks
- Authors: Fan Dong, Ali Abbasi, Henry Leung, Xin Wang, Jiayu Zhou, Steve Drew
- Abstract summary: Federated learning is a promising approach under the networking and data privacy constraints of aerial and space networks (ASNs).
We propose a novel framework that emphasizes updates from high-diversity clients and diminishes the influence of those from low-diversity clients.
We show that WeiAvgCS could converge 46% faster on FashionMNIST and 38% faster on CIFAR10 than its benchmarks on average.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning offers a promising approach under the networking and data privacy constraints of aerial and space networks (ASNs), utilizing large-scale private edge data from drones, balloons, and satellites. Existing research has extensively studied the optimization of the learning process, computing efficiency, and communication overhead. An important yet often overlooked aspect is that participants contribute predictive knowledge of varying diversity, which affects the quality of the learned federated models. In this paper, we address this issue by introducing a Weighted Averaging and Client Selection (WeiAvgCS) framework that emphasizes updates from high-diversity clients and diminishes the influence of those from low-diversity clients. Because directly sharing each client's data distribution would reveal additional private information, we instead estimate diversity with a projection-based method. Extensive experiments show WeiAvgCS's effectiveness: in our experiments, it converged 46% faster on FashionMNIST and 38% faster on CIFAR10 than its benchmarks on average.
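As a rough illustration of the aggregation idea, the sketch below up-weights clients whose updates look more diverse. The random-projection variance used as the diversity proxy, the power weighting `alpha`, and all function names are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def estimate_diversity(update, proj_dim=16, seed=0):
    """Hypothetical projection-based diversity proxy: project the flattened
    update onto a shared random subspace and score it by the variance of
    the projected coordinates."""
    rng = np.random.default_rng(seed)  # shared seed -> same projection on every client
    proj = rng.standard_normal((proj_dim, update.size)) / np.sqrt(proj_dim)
    return float(np.var(proj @ update.ravel()))

def weiavg_aggregate(updates, alpha=1.0):
    """Weighted averaging that emphasizes high-diversity clients.
    alpha=0 recovers plain (FedAvg-style) uniform averaging."""
    scores = np.array([estimate_diversity(u) for u in updates])
    weights = scores**alpha / (scores**alpha).sum()
    return sum(w * u for w, u in zip(weights, updates))

# Toy usage: the third client's update is noticeably more varied.
rng = np.random.default_rng(42)
clients = [s * rng.standard_normal(100) for s in (0.5, 0.5, 2.0)]
global_update = weiavg_aggregate(clients)
```

The client-selection half of the framework could plausibly rank clients by the same score and sample the top ones each round, though the paper's selection rule is not reproduced here.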
Related papers
- Optimizing Federated Learning by Entropy-Based Client Selection
Deep learning domains typically require an extensive amount of data for optimal performance.
FedOptEnt is designed to mitigate performance issues caused by label distribution skew.
The proposed method outperforms several state-of-the-art algorithms by up to 6% in classification accuracy.
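A minimal sketch of entropy-based selection, under the assumption that the server can score each client's label distribution; FedOptEnt's actual selection rule may differ.

```python
import numpy as np

def label_entropy(label_counts):
    """Shannon entropy of a client's label distribution (higher = less skewed)."""
    p = np.asarray(label_counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def select_clients_by_entropy(client_label_counts, k):
    """Pick the k clients whose label distributions are least skewed."""
    scores = [label_entropy(c) for c in client_label_counts]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

# Example: client 0 is heavily skewed; clients 1 and 2 are more balanced.
counts = [[95, 5, 0, 0], [30, 25, 25, 20], [40, 30, 20, 10]]
print(select_clients_by_entropy(counts, k=2))  # -> [1, 2]
```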
arXiv Detail & Related papers (2024-11-02T13:31:36Z)
- Personalized Federated Learning with Attention-based Client Selection
We propose FedACS, a new PFL algorithm with an Attention-based Client Selection mechanism.
FedACS integrates an attention mechanism to enhance collaboration among clients with similar data distributions.
Experiments on CIFAR10 and FMNIST validate FedACS's superiority.
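A toy sketch of attention-style weighting over peers via cosine similarity between updates, used here as a proxy for similar data distributions; the paper's actual attention mechanism is not specified in this summary, so this is only illustrative.

```python
import numpy as np

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def attention_weights(query_update, peer_updates, temperature=1.0):
    """Attention over peers: higher weight for peers whose updates point
    the same way as the querying client's."""
    q = query_update / np.linalg.norm(query_update)
    sims = np.array([q @ (p / np.linalg.norm(p)) for p in peer_updates])
    return softmax(sims / temperature)

# The client collaborates mostly with the peer whose update is most aligned.
rng = np.random.default_rng(1)
me = rng.standard_normal(50)
peers = [me + 0.1 * rng.standard_normal(50), rng.standard_normal(50)]
print(attention_weights(me, peers))  # first peer gets most of the mass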
arXiv Detail & Related papers (2023-12-23T03:31:46Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
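Each client's local optimizer could look like a standard AMSGrad step with per-client state, as sketched below; FedLALR's actual learning-rate scheduling rule is more involved, so treat this as a simplified stand-in.

```python
import numpy as np

class LocalAMSGrad:
    """Per-client AMSGrad state: each client adapts its own effective step
    sizes through the v_hat statistics of its local gradients."""

    def __init__(self, dim, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.m = np.zeros(dim)
        self.v = np.zeros(dim)
        self.v_hat = np.zeros(dim)

    def step(self, params, grad):
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad**2
        self.v_hat = np.maximum(self.v_hat, self.v)  # AMSGrad: non-decreasing v
        return params - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)
```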
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Straggler-Resilient Personalized Federated Learning
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
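A bare-bones sketch of the shared-representation idea: a common low-dimensional projection B learned across clients, with each client fitting a small personal head on top of it. The ridge-regression head, dimensions, and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 20, 3                      # ambient and representation dimensions
B = rng.standard_normal((d, k))   # global representation (aggregated across clients)

def fit_personal_head(X, y, B, reg=1e-3):
    """Each client solves a small ridge regression in the shared k-dim space."""
    Z = X @ B  # embed local data through the common representation
    return np.linalg.solve(Z.T @ Z + reg * np.eye(Z.shape[1]), Z.T @ y)

# Client i's personalized parameters live only in the k-dim head.
X_i = rng.standard_normal((50, d))
y_i = X_i @ B @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(50)
w_i = fit_personal_head(X_i, y_i, B)
```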
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- Federated Learning Under Intermittent Client Availability and Time-Varying Communication Constraints
Federated learning systems operate in settings with intermittent client availability and/or time-varying communication constraints.
We propose F3AST, an unbiased algorithm that learns an availability-dependent client selection strategy.
We show up to 186% and 8% accuracy improvements over FedAvg, and 8% and 7% over FedAdam on CIFAR100 and Shakespeare, respectively.
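A rough sketch of availability-aware selection: among clients online this round, favor those that are rarely available so they stay represented in the global model. This is a simplification, not F3AST's exact unbiasedness-preserving strategy.

```python
import numpy as np

def select_available(available_mask, avail_rate, k, eps=1e-3):
    """Pick k of the currently available clients, scoring rarely available
    clients higher so infrequent participants are not drowned out."""
    idx = np.flatnonzero(available_mask)
    scores = 1.0 / (avail_rate[idx] + eps)  # rare clients score higher
    return idx[np.argsort(-scores)][:k]

avail_rate = np.array([0.9, 0.9, 0.2, 0.5])   # long-run availability estimates
mask = np.array([True, True, True, False])    # who is online this round
print(select_available(mask, avail_rate, k=2))  # -> [2 0] (the rare client first)
```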
arXiv Detail & Related papers (2022-05-13T16:08:58Z)
- SPATL: Salient Parameter Aggregation and Transfer Learning for Heterogeneous Clients in Federated Learning
Efficient federated learning is one of the key challenges for training and deploying AI models on edge devices.
Maintaining data privacy in federated learning raises several challenges including data heterogeneity, expensive communication cost, and limited resources.
We propose a salient parameter selection agent based on deep reinforcement learning on local clients, and aggregate the selected salient parameters on the central server.
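As a simplified stand-in for the RL-based agent, the sketch below selects salient parameters by update magnitude and aggregates them sparsely on the server; the top-fraction threshold and function names are assumptions.

```python
import numpy as np

def salient_mask(update, frac=0.1):
    """Keep only the largest-magnitude fraction of the update
    (stand-in for SPATL's learned salient-parameter selection)."""
    k = max(1, int(frac * update.size))
    thresh = np.partition(np.abs(update).ravel(), -k)[-k]
    return np.abs(update) >= thresh

def aggregate_sparse(updates, frac=0.1):
    """Average clients' salient parameters; each entry averages only over
    the clients that actually sent it."""
    total = np.zeros_like(updates[0])
    counts = np.zeros_like(updates[0])
    for u in updates:
        m = salient_mask(u, frac)
        total += np.where(m, u, 0.0)
        counts += m
    return np.divide(total, counts, out=np.zeros_like(total), where=counts > 0)
```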
arXiv Detail & Related papers (2021-11-29T06:28:05Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
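One way to generate pseudo-samples without touching client data is to invert the global model's weights; below is a bare-bones version for a linear softmax model, synthesizing an input the model assigns to an under-represented class. This is only a conceptual sketch, not the paper's augmentation mechanism.

```python
import numpy as np

def invert_class(W, b, target, steps=200, lr=0.5, rng=None):
    """Gradient ascent on log p(target | x) for a linear softmax model with
    weights W (classes x features) and biases b, starting from noise."""
    rng = rng or np.random.default_rng(0)
    x = 0.01 * rng.standard_normal(W.shape[1])
    for _ in range(steps):
        logits = W @ x + b
        p = np.exp(logits - logits.max())
        p /= p.sum()
        x += lr * (W[target] - p @ W)  # d/dx log p[target]
    return x

rng = np.random.default_rng(0)
W, b = rng.standard_normal((5, 8)), np.zeros(5)
pseudo = invert_class(W, b, target=3)  # synthetic input for class 3
```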
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
- Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks.
In realistic learning scenarios, the presence of heterogeneity across different clients' local datasets poses an optimization challenge.
We propose a novel momentum-based method to mitigate this decentralized training difficulty.
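A simplified sketch of the quasi-global momentum idea: drive the momentum buffer with the displacement of the gossip-mixed iterate, which tracks global progress, rather than with the node's heterogeneous local gradient. The paper's exact recursion differs.

```python
import numpy as np

def quasi_global_step(x_mixed, x_prev_mixed, m, local_grad, lr=0.1, mu=0.9):
    """One node's update: m estimates the globally averaged descent
    direction from consecutive mixed iterates; the local step then follows
    the local gradient plus this quasi-global momentum."""
    m = mu * m + (x_prev_mixed - x_mixed) / lr
    x_new = x_mixed - lr * (local_grad + mu * m)
    return x_new, m
```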
arXiv Detail & Related papers (2021-02-09T11:27:14Z)
- CatFedAvg: Optimising Communication-efficiency and Classification Accuracy in Federated Learning
We introduce a new family of Federated Learning algorithms called CatFedAvg.
It not only improves communication efficiency but also improves the quality of learning through a category coverage maximisation strategy.
Our experiments show an increase of 10 absolute percentage points in accuracy on the MNIST dataset, with 70% lower network transfer than FedAvg.
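A greedy sketch of a category-coverage strategy: pick clients so the selected cohort covers as many label categories as possible. This is an illustrative reading of the abstract, not the paper's exact algorithm.

```python
def select_for_coverage(client_labels, k):
    """Greedily select k clients to maximize the set of label categories
    covered by the chosen cohort."""
    chosen, covered = [], set()
    remaining = set(range(len(client_labels)))
    for _ in range(k):
        best = max(remaining, key=lambda i: len(set(client_labels[i]) - covered))
        chosen.append(best)
        covered |= set(client_labels[best])
        remaining.remove(best)
    return chosen, covered

labels = [[0, 1], [1, 2, 3], [4], [0, 4]]
print(select_for_coverage(labels, k=2))  # -> ([1, 3], {0, 1, 2, 3, 4})
```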
arXiv Detail & Related papers (2020-11-14T06:52:02Z)
- A Theoretical Perspective on Differentially Private Federated Multi-task Learning
Collaborative learning models need to be developed with respect to both privacy and utility concerns.
We propose a new federated multi-task learning method for effective parameter transfer, with differential privacy providing protection at the client level.
We are the first to provide both privacy and utility guarantees for such a proposed algorithm.
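Client-level protection is typically obtained by clipping and noising each client's contribution before it leaves the device; below is a generic Gaussian-mechanism sketch, not the paper's specific multi-task mechanism.

```python
import numpy as np

def dp_clip_and_noise(update, clip_norm=1.0, noise_mult=1.0, rng=None):
    """Clip the update's L2 norm, then add Gaussian noise calibrated to the
    clipping bound before the update is sent to the server."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_mult * clip_norm, size=update.shape)
```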
arXiv Detail & Related papers (2020-11-14T00:53:16Z)