FedEntropy: Efficient Device Grouping for Federated Learning Using Maximum Entropy Judgment
- URL: http://arxiv.org/abs/2205.12038v1
- Date: Tue, 24 May 2022 12:45:17 GMT
- Title: FedEntropy: Efficient Device Grouping for Federated Learning Using Maximum Entropy Judgment
- Authors: Zhiwei Ling, Zhihao Yue, Jun Xia, Ming Hu, Ting Wang, Mingsong Chen
- Abstract summary: Federated Learning (FL) has attracted steadily increasing attention as a promising distributed machine learning paradigm.
FL inherently suffers from low classification accuracy in non-IID scenarios.
We present an effective FL method named FedEntropy with a novel dynamic device grouping scheme.
- Score: 10.507028041279048
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Along with the popularity of Artificial Intelligence (AI) and
Internet-of-Things (IoT), Federated Learning (FL) has attracted steadily
increasing attention as a promising distributed machine learning paradigm,
which enables the training of a central model across numerous decentralized
devices without exposing their private data. However, due to the biased data
distributions on the involved devices, FL inherently suffers from low
classification accuracy in non-IID scenarios. Although various device grouping
methods have been proposed to address this problem, most of them neglect both
i) the distinct data distribution characteristics of heterogeneous devices,
and ii) the contributions and hazards of local models, both of which are
extremely important in determining the quality of global model aggregation.
In this paper, we present an effective FL method named FedEntropy with a novel
dynamic device grouping scheme, which makes full use of the above two factors
based on our proposed maximum entropy judgment heuristic. Unlike existing FL
methods that directly aggregate the local models returned from all the
selected devices, in each FL round FedEntropy first makes a judgment based on
the pre-collected soft labels of the selected devices and then aggregates only
the local models that maximize the overall entropy of these soft labels. By
not collecting local models that are harmful to aggregation, FedEntropy can
effectively improve global model accuracy while reducing the overall
communication overhead. Comprehensive experimental results on well-known
benchmarks show that FedEntropy not only outperforms state-of-the-art FL
methods in terms of model accuracy and communication overhead, but can also be
integrated into them to enhance their classification performance.
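
The abstract describes the judgment step only in prose, so the following is a
minimal sketch of one plausible reading. The helper names (entropy,
max_entropy_group) and the greedy drop-one search are assumptions for
illustration; the summary above does not specify how soft labels are
pre-collected or how the group search is ordered.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a probability vector p."""
    p = np.clip(p, eps, 1.0)
    return float(-np.sum(p * np.log(p)))

def max_entropy_group(soft_labels):
    """Greedily drop devices whose soft labels lower the overall entropy.

    soft_labels maps a device id to that device's average predicted class
    distribution (a length-C probability vector), collected by the server
    before any local model is uploaded.
    """
    selected = set(soft_labels)

    def overall(ids):
        # Entropy of the soft labels aggregated over a candidate group.
        return entropy(np.mean([soft_labels[i] for i in ids], axis=0))

    improved = True
    while improved and len(selected) > 1:
        improved = False
        base = overall(selected)
        for dev in sorted(selected):
            if overall(selected - {dev}) > base:
                selected.discard(dev)  # dropping dev raises overall entropy
                improved = True
                break                  # restart the scan from the new group
    return selected                    # only these devices upload models
```

Under this reading, devices whose removal raises the aggregate entropy are
judged harmful to aggregation and skip the model upload entirely, which is
where the claimed communication savings would come from.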
Related papers
- Client Contribution Normalization for Enhanced Federated Learning [4.726250115737579]
Mobile devices, including smartphones and laptops, generate decentralized and heterogeneous data.
Federated Learning (FL) offers a promising alternative by enabling collaborative training of a global model across decentralized devices without data sharing.
This paper focuses on data-dependent heterogeneity in FL and proposes a novel approach leveraging mean latent representations extracted from locally trained models.
arXiv Detail & Related papers (2024-11-10T04:03:09Z)
- Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF) that leverages the optimization procedure of NNs via backpropagation to update the global model in a layer-wise fashion.
arXiv Detail & Related papers (2024-03-27T09:14:36Z)
- Straggler-resilient Federated Learning: Tackling Computation Heterogeneity with Layer-wise Partial Model Training in Mobile Edge Network [4.1813760301635705]
We propose Federated Partial Model Training (FedPMT), where devices with smaller computational capabilities work on partial models and contribute to the global model.
As such, all devices in FedPMT prioritize the most crucial parts of the global model.
Empirical results show that FedPMT significantly outperforms the existing benchmark FedDrop.
arXiv Detail & Related papers (2023-11-16T16:30:04Z)
- Filling the Missing: Exploring Generative AI for Enhanced Federated Learning over Heterogeneous Mobile Edge Devices [72.61177465035031]
We propose a generative AI-empowered federated learning approach to address these challenges by leveraging the idea of FIlling the MIssing (FIMI) portion of local data.
Experiment results demonstrate that FIMI can save up to 50% of the device-side energy to achieve the target global test accuracy.
arXiv Detail & Related papers (2023-10-21T12:07:04Z)
- FedHiSyn: A Hierarchical Synchronous Federated Learning Framework for Resource and Data Heterogeneity [56.82825745165945]
Federated Learning (FL) enables training a global model without sharing the decentralized raw data stored on multiple devices to protect data privacy.
We propose a hierarchical synchronous FL framework, i.e., FedHiSyn, to tackle the problems of straggler effects and outdated models.
We evaluate the proposed framework based on MNIST, EMNIST, CIFAR10 and CIFAR100 datasets and diverse heterogeneous settings of devices.
arXiv Detail & Related papers (2022-06-21T17:23:06Z)
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraint.
We propose a data-free knowledge distillation method to fine-tune the global model on the server (FedFTG).
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
- FedCAT: Towards Accurate Federated Learning via Device Concatenation [4.416919766772866]
Federated Learning (FL) enables all the involved devices to train a global model collaboratively without exposing their local data privacy.
For non-IID scenarios, the classification accuracy of FL models decreases drastically due to the weight divergence caused by data heterogeneity.
We introduce a novel FL approach named FedCAT that can achieve high model accuracy based on our proposed device selection strategy and device concatenation-based local training method.
arXiv Detail & Related papers (2022-02-23T10:08:43Z)
- FedHM: Efficient Federated Learning for Heterogeneous Models via Low-rank Factorization [16.704006420306353]
A scalable federated learning framework should address heterogeneous clients equipped with different computation and communication capabilities.
This paper proposes FedHM, a novel federated model compression framework that distributes the heterogeneous low-rank models to clients and then aggregates them into a global full-rank model.
Our solution enables the training of heterogeneous local models with varying computational complexities and aggregates them into a single global model.
arXiv Detail & Related papers (2021-11-29T16:11:09Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Gradual Federated Learning with Simulated Annealing [26.956032164461377]
Federated averaging (FedAvg) is a popular federated learning (FL) technique that updates the global model by averaging local models.
In this paper, we propose a new FL technique based on simulated annealing, termed SAFL.
We show that SAFL outperforms the conventional FedAvg technique in terms of the convergence speed and the classification accuracy.
arXiv Detail & Related papers (2021-10-11T11:57:56Z)
- Federated Learning With Quantized Global Model Updates [84.55126371346452]
We study federated learning, which enables mobile devices to utilize their local datasets to train a global model.
We introduce a lossy FL (LFL) algorithm, in which both the global model and the local model updates are quantized before being transmitted.
arXiv Detail & Related papers (2020-06-18T16:55:20Z)
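
The last entry above states only that both the global model and the local
updates are quantized before transmission. As a hedged illustration, here is a
plain uniform stochastic quantizer; the actual LFL quantizer is not specified
in this summary, so treat this as a stand-in.

```python
import numpy as np

def quantize(x, bits=8, rng=None):
    """Uniformly quantize a float tensor to 2**bits levels (bits <= 8 here)."""
    rng = rng or np.random.default_rng()
    lo, hi = float(x.min()), float(x.max())
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = (x - lo) / scale                   # map values onto [0, levels]
    q = np.floor(q + rng.random(x.shape))  # stochastic rounding (unbiased)
    return q.astype(np.uint8), lo, scale   # payload actually transmitted

def dequantize(q, lo, scale):
    """Reconstruct an approximate float tensor from its quantized form."""
    return q.astype(np.float32) * scale + lo
```

In one round the server would broadcast quantize(global_weights), each device
would dequantize, train locally, and return quantize(local_update); unbiased
stochastic rounding keeps the averaged update centered on its full-precision
value.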
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.