FedSAE: A Novel Self-Adaptive Federated Learning Framework in
Heterogeneous Systems
- URL: http://arxiv.org/abs/2104.07515v1
- Date: Thu, 15 Apr 2021 15:14:11 GMT
- Title: FedSAE: A Novel Self-Adaptive Federated Learning Framework in
Heterogeneous Systems
- Authors: Li Li, Moming Duan, Duo Liu, Yu Zhang, Ao Ren, Xianzhang Chen, Yujuan
Tan, Chengliang Wang
- Abstract summary: Federated Learning (FL) is a novel distributed machine learning paradigm that allows thousands of edge devices to train models locally without uploading data centrally to the server.
We introduce a novel self-adaptive federated framework, FedSAE, which automatically adjusts the training task of each device and actively selects participants to alleviate performance degradation.
In our framework, the server evaluates each device's training value based on its training loss, and then selects the clients with larger value for the global model to reduce communication overhead.
- Score: 14.242716751043533
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) is a novel distributed machine learning
paradigm that allows thousands of edge devices to train models locally without
uploading data centrally to the server. However, since real federated settings
are resource-constrained, FL suffers from systems heterogeneity, which directly
causes many stragglers and in turn leads to a significant reduction in
accuracy. To solve the problems caused by systems heterogeneity, we introduce a
novel self-adaptive federated framework, FedSAE, which automatically adjusts
the training task of each device and actively selects participants to
alleviate performance degradation. In this work, we 1) propose FedSAE, which
leverages the complete information of devices' historical training tasks to
predict the affordable training workload for each device. In this way, FedSAE
can estimate the reliability of each device and self-adaptively adjust the
amount of training load per client in each round. 2) combine our framework
with Active Learning to self-adaptively select participants, which accelerates
the convergence of the global model. In our framework, the server evaluates
each device's training value based on its training loss, and then selects the
clients with larger value for the global model to reduce communication
overhead. Experimental results indicate that in a highly heterogeneous system,
FedSAE converges faster than FedAvg, the vanilla FL framework. Furthermore,
FedSAE outperforms FedAvg on several federated datasets: FedSAE improves test
accuracy by 26.7% and reduces stragglers by 90.3% on average.
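The abstract describes two mechanisms: predicting each device's affordable workload from its training history, and selecting participants whose training loss marks them as most valuable. Below is a minimal, illustrative Python sketch of both ideas; the AIMD-style workload heuristic, the loss-weighted sampling, and all class, function, and field names are assumptions for illustration, not the paper's exact algorithm.

```python
import random
from dataclasses import dataclass, field

@dataclass
class ClientRecord:
    """Bookkeeping for one device (hypothetical fields, not from the paper)."""
    affordable_epochs: float = 5.0      # current estimate of the workload it can finish
    completed_history: list = field(default_factory=list)
    last_loss: float = float("inf")     # training loss reported in the last round

def update_affordable_workload(rec: ClientRecord, finished: bool) -> None:
    """Adjust the predicted workload based on whether the device finished its task.

    An AIMD-style heuristic: grow slowly on success, shrink sharply on failure.
    This is an assumed rule standing in for FedSAE's history-based prediction.
    """
    rec.completed_history.append(finished)
    if finished:
        rec.affordable_epochs += 1.0                                  # additive increase
    else:
        rec.affordable_epochs = max(1.0, rec.affordable_epochs / 2)   # multiplicative decrease

def select_participants(records: dict, k: int) -> list:
    """Sample k clients with probability proportional to their last training loss,
    so high-loss ('more valuable') clients are favoured, as the abstract suggests.
    Sampling is with replacement here purely for simplicity."""
    ids = list(records)
    weights = [records[c].last_loss if records[c].last_loss != float("inf") else 1.0
               for c in ids]
    return random.choices(ids, weights=weights, k=k)

# Example round: update workload estimates, then choose 2 of 4 clients for the next round.
clients = {c: ClientRecord() for c in ["a", "b", "c", "d"]}
for c, finished, loss in [("a", True, 0.9), ("b", False, 2.3), ("c", True, 0.4), ("d", True, 1.5)]:
    update_affordable_workload(clients[c], finished)
    clients[c].last_loss = loss
print(select_participants(clients, k=2))
```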
Related papers
- Efficient Asynchronous Federated Learning with Sparsification and
Quantization [55.6801207905772]
Federated Learning (FL) is attracting increasing attention as a way to collaboratively train a machine learning model without transferring raw data.
FL generally relies on a parameter server and a large number of edge devices throughout the model training process.
We propose TEASQ-Fed to exploit edge devices to asynchronously participate in the training process by actively applying for tasks.
arXiv Detail & Related papers (2023-12-23T07:47:07Z)
- FedFNN: Faster Training Convergence Through Update Predictions in
Federated Recommender Systems [4.4273123155989715]
Federated Learning (FL) has emerged as a key approach for distributed machine learning.
This paper introduces FedFNN, an algorithm that accelerates decentralized model training.
arXiv Detail & Related papers (2023-09-14T13:18:43Z)
- FedHiSyn: A Hierarchical Synchronous Federated Learning Framework for
Resource and Data Heterogeneity [56.82825745165945]
Federated Learning (FL) enables training a global model without sharing the decentralized raw data stored on multiple devices to protect data privacy.
We propose a hierarchical synchronous FL framework, i.e., FedHiSyn, to tackle the problems of straggler effects and outdated models.
We evaluate the proposed framework based on MNIST, EMNIST, CIFAR10 and CIFAR100 datasets and diverse heterogeneous settings of devices.
arXiv Detail & Related papers (2022-06-21T17:23:06Z)
- FedSA: Accelerating Intrusion Detection in Collaborative Environments
with Federated Simulated Annealing [2.7011265453906983]
Federated learning emerges as a solution for collaboratively training an Intrusion Detection System (IDS).
This paper proposes the Federated Simulated Annealing (FedSA) metaheuristic to select the hyperparameters and a subset of participants for each aggregation round in federated learning.
Compared with the conventional aggregation approach, the proposal requires up to 50% fewer aggregation rounds to achieve approximately 97% accuracy in attack detection.
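The summary above names simulated annealing as the selection metaheuristic. As a generic, hedged sketch only, the loop below shows how an annealing search could pick a participant subset; the cost function, temperature schedule, and all names are assumptions for illustration and are not taken from the FedSA paper.

```python
import math
import random

def anneal_participant_subset(clients, k, cost, steps=200, t0=1.0, cooling=0.95):
    """Generic simulated annealing over size-k subsets of `clients`.

    `cost(subset)` is any user-supplied objective to minimise (assumed here);
    worse neighbours are accepted with probability exp(-delta / temperature).
    """
    current = random.sample(clients, k)
    best, best_cost = current[:], cost(current)
    temperature = t0
    for _ in range(steps):
        # Neighbour move: swap one selected client for one unselected client.
        candidate = current[:]
        candidate[random.randrange(k)] = random.choice(
            [c for c in clients if c not in candidate])
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            current = candidate
            if cost(current) < best_cost:
                best, best_cost = current[:], cost(current)
        temperature *= cooling
    return best

# Toy usage: prefer clients with lower (made-up) communication latency.
latency = {"a": 30, "b": 120, "c": 45, "d": 80, "e": 60}
print(anneal_participant_subset(list(latency), k=3,
                                cost=lambda s: sum(latency[c] for c in s)))
```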
arXiv Detail & Related papers (2022-05-23T14:27:56Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local
Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Federated Dynamic Sparse Training: Computing Less, Communicating Less,
Yet Learning Better [88.28293442298015]
Federated learning (FL) enables distribution of machine learning workloads from the cloud to resource-limited edge devices.
We develop, implement, and experimentally validate a novel FL framework termed Federated Dynamic Sparse Training (FedDST)
FedDST is a dynamic process that extracts and trains sparse sub-networks from the target full network.
arXiv Detail & Related papers (2021-12-18T02:26:38Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
- Fed-Focal Loss for imbalanced data classification in Federated Learning [2.2172881631608456]
Federated Learning has a central server coordinating the training of a model on a network of devices.
One of the challenges is variable training performance when the dataset has a class imbalance.
We propose to address the class imbalance by reshaping cross-entropy loss such that it down-weights the loss assigned to well-classified examples along the lines of focal loss.
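The "reshaped cross-entropy" this summary alludes to follows the standard focal loss of Lin et al.; a small sketch of that standard form is below, with the gamma and alpha values chosen arbitrarily for illustration rather than taken from the Fed-Focal Loss paper.

```python
import math

def focal_loss(p_correct: float, gamma: float = 2.0, alpha: float = 0.25) -> float:
    """Standard focal loss for one example: -alpha * (1 - p)^gamma * log(p).

    p_correct is the model's predicted probability for the true class.
    Well-classified examples (p close to 1) are down-weighted by (1 - p)^gamma.
    """
    return -alpha * (1.0 - p_correct) ** gamma * math.log(p_correct)

# A well-classified example contributes far less loss than a hard one.
print(focal_loss(0.95))   # small
print(focal_loss(0.30))   # much larger
```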
arXiv Detail & Related papers (2020-11-12T09:52:14Z)
- FedGroup: Efficient Clustered
Data-Driven Measure [18.083188787905083]
We propose a novel clustered federated learning (CFL) framework FedGroup.
We show that FedGroup can significantly improve absolute test accuracy by +14.1% on FEMNIST compared to FedAvg.
We also evaluate FedGroup and FedGrouProx (combined with FedProx) on several open datasets.
arXiv Detail & Related papers (2020-10-14T08:15:34Z)
- Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
arXiv Detail & Related papers (2020-07-26T14:37:51Z)