Federated Learning Under Intermittent Client Availability and
Time-Varying Communication Constraints
- URL: http://arxiv.org/abs/2205.06730v1
- Date: Fri, 13 May 2022 16:08:58 GMT
- Title: Federated Learning Under Intermittent Client Availability and
Time-Varying Communication Constraints
- Authors: Monica Ribero and Haris Vikalo and Gustavo De Veciana
- Abstract summary: Federated learning systems operate in settings with intermittent client availability and/or time-varying communication constraints.
We propose F3AST, an unbiased algorithm that learns an availability-dependent client selection strategy.
We show up to 186% and 8% accuracy improvements over FedAvg, and 8% and 7% over FedAdam on CIFAR100 and Shakespeare, respectively.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning systems facilitate training of global models in settings
where potentially heterogeneous data is distributed across a large number of
clients. Such systems operate in settings with intermittent client availability
and/or time-varying communication constraints. As a result, the global models
trained by federated learning systems may be biased towards clients with higher
availability. We propose F3AST, an unbiased algorithm that dynamically learns
an availability-dependent client selection strategy which asymptotically
minimizes the impact of client-sampling variance on the global model
convergence, enhancing performance of federated learning. The proposed
algorithm is tested in a variety of settings for intermittently available
clients under communication constraints; its efficacy is demonstrated on
synthetic data and on realistic federated benchmarking experiments using the
CIFAR100 and Shakespeare datasets. We show up to 186% and 8% accuracy
improvements over FedAvg, and 8% and 7% over FedAdam on CIFAR100 and
Shakespeare, respectively.
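
The abstract does not spell out F3AST's update rule, but the core difficulty it targets can be illustrated: if the server simply averages whichever clients respond, the model drifts toward highly available clients. Below is a minimal toy sketch (not the paper's algorithm; all names, constants, and the availability estimator are illustrative) in which the server tracks an estimate of each client's availability and reweights sampled updates by the inverse of their estimated participation probability, keeping the expected aggregate equal to the full-participation average:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, D, ROUNDS = 20, 5, 4, 200          # clients, picks per round, dim, rounds
true_avail = rng.uniform(0.2, 0.9, N)    # heterogeneous (unknown) availability
local_opt = rng.normal(size=(N, D))      # each client's local optimum
target = local_opt.mean(axis=0)          # the unbiased global objective

w = np.zeros(D)
avail_est = np.full(N, 0.5)              # running availability estimate
BETA, LR = 0.1, 0.3                      # EMA rate, server step size

for _ in range(ROUNDS):
    active = rng.random(N) < true_avail                  # who is reachable now
    avail_est = (1 - BETA) * avail_est + BETA * active   # EMA of availability

    idx = np.flatnonzero(active)
    if idx.size == 0:
        continue
    m = min(M, idx.size)
    chosen = rng.choice(idx, size=m, replace=False)

    # P(client i participates) ~= P(available) * P(picked | available)
    p = np.clip(avail_est * m / idx.size, 1e-3, 1.0)

    # Inverse-probability weighting keeps the aggregate unbiased:
    # E[ sum over chosen of g_i / (N * p_i) ] ~= mean of g_i over ALL clients.
    agg = np.zeros(D)
    for i in chosen:
        g = local_opt[i] - w             # toy stand-in for a local update
        agg += g / (N * p[i])
    w += LR * agg

print("distance to unbiased target:", np.linalg.norm(w - target))
```

Dropping the `1 / (N * p[i])` weights and averaging the chosen updates directly reproduces the availability bias described above.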
Related papers
- Adversarial Federated Consensus Learning for Surface Defect Classification Under Data Heterogeneity in IIoT
It is difficult to collect and centralize sufficient training data from the various entities in the Industrial Internet of Things (IIoT).
Federated learning (FL) provides a solution by enabling collaborative global model training across clients.
We propose a novel personalized FL approach named Adversarial Federated Consensus Learning (AFedCL).
arXiv: 2024-09-24
- FedCAda: Adaptive Client-Side Optimization for Accelerated and Stable Federated Learning
Federated learning (FL) has emerged as a prominent approach for collaborative training of machine learning models across distributed clients.
We introduce FedCAda, an innovative client-side adaptive optimization algorithm for federated learning.
We demonstrate that FedCAda outperforms state-of-the-art methods in terms of adaptability, convergence, stability, and overall performance.
arXiv: 2024-05-20
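
The entry above does not describe FedCAda's mechanism; as background, "client-side adaptive optimization" means each client maintains its own optimizer state across local steps. A generic sketch of such state (plain Adam moments with illustrative names, not FedCAda's actual correction scheme) looks like this:

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam step using moment estimates (m, v) held by the client."""
    m = b1 * m + (1 - b1) * g                  # first-moment EMA
    v = b2 * v + (1 - b2) * g * g              # second-moment EMA
    m_hat = m / (1 - b1 ** t)                  # bias correction
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```

The design question such methods face, and which FedCAda reportedly addresses, is how to reconcile these locally held moments with server-side aggregation so that training stays stable.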
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv: 2024-04-29
- Personalized Federated Learning via Amortized Bayesian Meta-Learning
We introduce a new perspective on personalized federated learning through Amortized Bayesian Meta-Learning.
Specifically, we propose a novel algorithm called FedABML, which employs hierarchical variational inference across clients.
Our theoretical analysis provides an upper bound on the average generalization error and guarantees the generalization performance on unseen data.
arXiv: 2023-07-05
- FedSampling: A Better Sampling Strategy for Federated Learning
Federated learning (FL) is an important technique for learning models from decentralized data in a privacy-preserving way.
Existing FL methods usually uniformly sample clients for local model learning in each round.
We propose FedSampling, a novel data-uniform sampling strategy for federated learning.
arXiv: 2023-06-25
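
The contrast this entry draws is between sampling clients uniformly and sampling data uniformly. A minimal way to see it (a hypothetical illustration assuming data sizes are known to the server; the actual FedSampling estimates them in a privacy-preserving way) is to pick clients with probability proportional to local data size, so every individual sample is equally likely to be represented in a round:

```python
import numpy as np

rng = np.random.default_rng(1)
data_sizes = np.array([500, 50, 2000, 120, 800])  # per-client sample counts

# Uniform-over-clients sampling makes a sample held by a small client far
# more likely to be used than one held by a large client; size-proportional
# client weights equalize every sample's chance of contributing.
p_client = data_sizes / data_sizes.sum()
chosen = rng.choice(len(data_sizes), size=2, replace=False, p=p_client)
print("selected clients:", chosen)
```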
- Federated Graph-based Sampling with Arbitrary Client Availability
We propose a framework named Federated Graph-based Sampling (FedGS) to simultaneously stabilize the global model update and mitigate long-term bias under arbitrary client availability.
Our experimental results confirm FedGS's advantage in both enabling a fair client-sampling scheme and improving the model performance under arbitrary client availability.
arXiv: 2022-11-25
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on this key observation, we design an efficient client sampling mechanism, Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
arXiv: 2022-09-30
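
The entry does not define its class-imbalance measure. One natural plaintext sketch consistent with it (Fed-CBS derives its measure under homomorphic encryption, which is omitted here) scores a candidate client group by the sum of squared class proportions of the pooled labels; the score bottoms out at 1/C for a perfectly balanced group and approaches 1 as the group concentrates on few classes:

```python
import numpy as np

def class_imbalance(label_counts):
    """Sum of squared class proportions of the pooled data (rows = clients,
    cols = classes). Equals 1/C when balanced, grows toward 1 when skewed."""
    pooled = label_counts.sum(axis=0).astype(float)
    p = pooled / pooled.sum()
    return float((p ** 2).sum())

counts = np.array([[90, 5, 5],    # client skewed toward class 0
                   [5, 90, 5],    # client skewed toward class 1
                   [5, 5, 90]])   # client skewed toward class 2
print(class_imbalance(counts[:1]))  # one skewed client   -> 0.815
print(class_imbalance(counts))      # complementary group -> 1/3 (balanced)
```

A sampler can then grow the round's client group greedily, adding whichever client lowers this score the most.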
- Straggler-Resilient Personalized Federated Learning
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a common global representation using all clients' data, and learns a user-specific set of parameters leading to a personalized solution for each client.
arXiv: 2022-06-05
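
The shared-representation idea in this entry can be sketched concretely. The toy below uses a linear model with illustrative shapes (it is not the paper's actual procedure and carries none of its speedup guarantees): each client refines a personal head on top of a common representation, and the server averages only the representation gradients:

```python
import numpy as np

rng = np.random.default_rng(2)
D_IN, D_REP, N = 8, 4, 3
B = rng.normal(size=(D_IN, D_REP)) * 0.1                  # shared representation
heads = [rng.normal(size=D_REP) * 0.1 for _ in range(N)]  # per-client heads

def client_step(B, h, X, y, lr=0.05):
    """Refine the personal head locally, then return the gradient of the
    shared representation for server-side averaging."""
    Z = X @ B                                   # features in the shared space
    err = Z @ h - y                             # linear-regression residual
    h_new = h - lr * Z.T @ err / len(y)         # personal-head update
    grad_B = X.T @ np.outer(err, h) / len(y)    # shared-representation gradient
    return h_new, grad_B

for _ in range(20):                             # a few communication rounds
    grads = []
    for i in range(N):
        X, y = rng.normal(size=(32, D_IN)), rng.normal(size=32)
        heads[i], gB = client_step(B, heads[i], X, y)
        grads.append(gB)
    B -= 0.05 * np.mean(grads, axis=0)          # server averages gradients
```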
- Towards Fair Federated Learning with Zero-Shot Data Augmentation
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv: 2021-04-27
- CatFedAvg: Optimising Communication-efficiency and Classification Accuracy in Federated Learning
We introduce a new family of Federated Learning algorithms called CatFedAvg.
It not only improves communication efficiency but also improves the quality of learning using a category coverage maximization strategy.
Our experiments show an increase of 10 absolute percentage points in accuracy on the MNIST dataset with 70% lower network transfer compared to FedAvg.
arXiv: 2020-11-14
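
The category-coverage idea can be made concrete with a small selector. This is a hedged sketch of one plausible coverage-maximization rule (greedy set cover over clients' label sets; not CatFedAvg's published procedure, and the client names and label sets are made up):

```python
def greedy_category_cover(client_labels, budget):
    """Greedily pick clients whose label sets add the most uncovered
    categories; stops early once no client adds anything new."""
    covered, chosen = set(), []
    remaining = dict(client_labels)
    for _ in range(budget):
        best = max(remaining, key=lambda c: len(remaining[c] - covered),
                   default=None)
        if best is None or not (remaining[best] - covered):
            break
        chosen.append(best)
        covered |= remaining.pop(best)
    return chosen, covered

clients = {"a": {0, 1}, "b": {1, 2, 3}, "c": {4}, "d": {0, 4}}
print(greedy_category_cover(clients, budget=2))  # (['b', 'd'], {0, 1, 2, 3, 4})
```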