FedHiSyn: A Hierarchical Synchronous Federated Learning Framework for
Resource and Data Heterogeneity
- URL: http://arxiv.org/abs/2206.10546v1
- Date: Tue, 21 Jun 2022 17:23:06 GMT
- Title: FedHiSyn: A Hierarchical Synchronous Federated Learning Framework for
Resource and Data Heterogeneity
- Authors: Guanghao Li, Yue Hu, Miao Zhang, Ji Liu, Quanjun Yin, Yong Peng,
Dejing Dou
- Abstract summary: Federated Learning (FL) enables training a global model without sharing the decentralized raw data stored on multiple devices to protect data privacy.
We propose a hierarchical synchronous FL framework, i.e., FedHiSyn, to tackle the problems of straggler effects and outdated models.
We evaluate the proposed framework based on MNIST, EMNIST, CIFAR10 and CIFAR100 datasets and diverse heterogeneous settings of devices.
- Score: 56.82825745165945
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) enables training a global model without sharing the
decentralized raw data stored on multiple devices to protect data privacy. Due
to the diverse capacity of the devices, FL frameworks struggle to tackle the
problems of straggler effects and outdated models. In addition, the data
heterogeneity incurs severe accuracy degradation of the global model in the FL
training process. To address the aforementioned issues, we propose a hierarchical
synchronous FL framework, i.e., FedHiSyn. FedHiSyn first clusters all available
devices into a small number of categories based on their computing capacity.
After a certain interval of local training, the models trained in different
categories are simultaneously uploaded to a central server. Within a single
category, the devices communicate the local updated model weights to each other
based on a ring topology. Since training in a ring topology is most efficient
when the participating devices have homogeneous resources, grouping devices by
computing capacity mitigates the impact of stragglers. Moreover, combining the
synchronous update across categories with the device-to-device communication
within each category helps address data heterogeneity while achieving high
accuracy. We evaluate the proposed framework based
on MNIST, EMNIST, CIFAR10 and CIFAR100 datasets and diverse heterogeneous
settings of devices. Experimental results show that FedHiSyn outperforms six
baseline methods, e.g., FedAvg, SCAFFOLD, and FedAT, in terms of training
accuracy and efficiency.
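The workflow described in the abstract can be summarized in a minimal sketch. This is not the authors' implementation: it assumes model weights are a single NumPy vector, each device exposes a scalar capacity, a local gradient function, a learning rate, and a dataset, the ring is traversed once per round, and the server uses FedAvg-style weighted averaging. Helper names such as cluster_by_capacity, ring_training, and fedhisyn_round are illustrative only.

```python
# Minimal sketch of one FedHiSyn-style round (not the authors' code).
# Assumptions: weights are one NumPy vector; each device dict carries a scalar
# "capacity", a gradient function "grad_fn", a learning rate "lr", and "data".
import numpy as np

def cluster_by_capacity(devices, num_categories):
    """Group devices into categories of similar computing capacity."""
    ranked = sorted(devices, key=lambda d: d["capacity"])
    size = -(-len(ranked) // num_categories)          # ceiling division
    return [ranked[i:i + size] for i in range(0, len(ranked), size)]

def ring_training(category, global_weights, local_steps):
    """Relay the model around the ring; each device trains it on its own data."""
    weights = global_weights.copy()
    for device in category:                           # ring order = category order
        for _ in range(local_steps):
            grad = device["grad_fn"](weights)         # gradient on private data
            weights -= device["lr"] * grad
    return weights                                    # category model to upload

def fedhisyn_round(devices, global_weights, num_categories=4, local_steps=5):
    """Ring training inside each category, then one synchronous aggregation."""
    categories = cluster_by_capacity(devices, num_categories)
    uploads = [ring_training(c, global_weights, local_steps) for c in categories]
    sizes = [sum(len(d["data"]) for d in c) for c in categories]
    # Weighted average of the simultaneously uploaded category models.
    return sum(w * s for w, s in zip(uploads, sizes)) / sum(sizes)
```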
Related papers
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- CaBaFL: Asynchronous Federated Learning via Hierarchical Cache and Feature Balance [23.125185494897522]
Federated Learning (FL) as a promising distributed machine learning paradigm has been widely adopted in Artificial Intelligence of Things (AIoT) applications.
The efficiency and inference capability of FL are seriously limited due to the presence of stragglers and data imbalance across massive AIoT devices.
We present a novel FL approach named CaBaFL, which includes a hierarchical Cache-based aggregation mechanism and a feature Balance-guided device selection strategy.
arXiv Detail & Related papers (2024-04-19T12:39:11Z)
- Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF), which leverages the layer-by-layer nature of backpropagation to update the global model in a layer-wise fashion (see the illustrative sketch after this list).
arXiv Detail & Related papers (2024-03-27T09:14:36Z)
- HeteroSwitch: Characterizing and Taming System-Induced Data Heterogeneity in Federated Learning [36.00729012296371]
Federated Learning (FL) is a practical approach to train deep learning models collaboratively across user-end devices.
In FL, participating user-end devices are highly fragmented in terms of hardware and software configurations.
We propose HeteroSwitch, which adaptively adopts generalization techniques depending on the level of bias caused by varying HW and SW configurations.
arXiv Detail & Related papers (2024-03-07T04:23:07Z)
- FLASH: Federated Learning Across Simultaneous Heterogeneities [54.80435317208111]
FLASH (Federated Learning Across Simultaneous Heterogeneities) is a lightweight and flexible client selection algorithm.
It outperforms state-of-the-art FL frameworks under extensive sources of heterogeneity.
It achieves substantial and consistent improvements over state-of-the-art baselines.
arXiv Detail & Related papers (2024-02-13T20:04:39Z)
- Supernet Training for Federated Image Classification under System Heterogeneity [15.2292571922932]
In this work, we propose a novel framework to consider both scenarios, namely Federation of Supernet Training (FedSup).
It is inspired by how averaging parameters in the model aggregation stage of Federated Learning (FL) is similar to weight-sharing in supernet training.
Under our framework, we present an efficient algorithm (E-FedSup) by sending the sub-model to clients in the broadcast stage for reducing communication costs and training overhead.
arXiv Detail & Related papers (2022-06-03T02:21:01Z)
- FedCAT: Towards Accurate Federated Learning via Device Concatenation [4.416919766772866]
Federated Learning (FL) enables all the involved devices to train a global model collaboratively without exposing their local data privacy.
For non-IID scenarios, the classification accuracy of FL models decreases drastically due to the weight divergence caused by data heterogeneity.
We introduce a novel FL approach named Fed-Cat that can achieve high model accuracy based on our proposed device selection strategy and device concatenation-based local training method.
arXiv Detail & Related papers (2022-02-23T10:08:43Z)
- FedMix: Approximation of Mixup under Mean Augmented Federated Learning [60.503258658382]
Federated learning (FL) allows edge devices to collectively learn a model without directly sharing data within each device.
Current state-of-the-art algorithms suffer from performance degradation as the heterogeneity of local data across clients increases.
We propose a new augmentation algorithm, named FedMix, which is inspired by a phenomenal yet simple data augmentation method, Mixup (a minimal Mixup sketch appears after this list).
arXiv Detail & Related papers (2021-07-01T06:14:51Z)
- Rethinking Architecture Design for Tackling Data Heterogeneity in Federated Learning [53.73083199055093]
We show that attention-based architectures (e.g., Transformers) are fairly robust to distribution shifts.
Our experiments show that replacing convolutional networks with Transformers can greatly reduce catastrophic forgetting of previous devices.
arXiv Detail & Related papers (2021-06-10T21:04:18Z)
- Federated learning with class imbalance reduction [24.044750119251308]
Federated learning (FL) is a technique that enables a large number of edge computing devices to collaboratively train a global learning model.
Due to privacy concerns, the raw data on devices cannot be made available to a centralized server.
In this paper, an estimation scheme is designed to reveal the class distribution without access to the raw data.
arXiv Detail & Related papers (2020-11-23T08:13:43Z)
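For the stragglers-aware layer-wise entry above, the following is an illustrative sketch under assumptions of our own, not the SALF authors' code: since backpropagation produces gradients from the last layer backward, a straggler that runs out of time can still report updates for its deepest layers, and the server averages each layer over whichever clients reached it.

```python
# Illustrative layer-wise aggregation in the spirit of SALF (assumptions ours,
# not the authors' code): each client reports updated weights only for the
# layers it finished during backprop; the server averages per layer over the
# clients that actually supplied that layer.
import numpy as np

def aggregate_layerwise(global_layers, client_updates):
    """
    global_layers:  list of np.ndarray, one entry per layer (shallow -> deep).
    client_updates: list of dicts {layer_index: np.ndarray}; a straggler's dict
                    contains only the deepest layers it managed to update.
    """
    new_layers = []
    for idx, layer in enumerate(global_layers):
        contributions = [u[idx] for u in client_updates if idx in u]
        if contributions:                    # some client reached this layer
            new_layers.append(np.mean(contributions, axis=0))
        else:                                # nobody did; keep the global weights
            new_layers.append(layer)
    return new_layers
```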
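For the FedMix entry above, here is a minimal sketch of plain Mixup, the augmentation FedMix builds on. FedMix itself approximates this in the federated setting by exchanging averaged data rather than raw samples; that approximation is not reproduced here.

```python
# Plain Mixup on a local batch (the building block FedMix approximates).
# Labels are assumed one-hot encoded; alpha parameterizes the Beta distribution.
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Return a convex combination of a batch with a shuffled copy of itself."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return x_mix, y_mix
```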