FEDIC: Federated Learning on Non-IID and Long-Tailed Data via Calibrated Distillation
- URL: http://arxiv.org/abs/2205.00172v1
- Date: Sat, 30 Apr 2022 06:17:36 GMT
- Title: FEDIC: Federated Learning on Non-IID and Long-Tailed Data via Calibrated Distillation
- Authors: Xinyi Shang, Yang Lu, Yiu-ming Cheung, Hanzi Wang
- Abstract summary: Dealing with non-IID data is one of the most challenging problems for federated learning.
This paper studies the joint problem of non-IID and long-tailed data in federated learning and proposes a corresponding solution called Federated Ensemble Distillation with Imbalance Calibration (FEDIC).
FEDIC uses a model ensemble to take advantage of the diversity of models trained on non-IID data.
- Score: 54.2658887073461
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning provides a privacy guarantee for training deep
learning models on distributed clients with different kinds of data.
Nevertheless, dealing with non-IID data is one of the most challenging problems
for federated learning. Researchers have proposed a variety of methods to
eliminate the negative influence of non-IIDness. However, these methods handle
non-IID data only under the assumption that the universal class distribution is
balanced. In many real-world applications, the universal class distribution is
long-tailed, which leaves the model seriously biased. Therefore, this paper studies the
joint problem of non-IID and long-tailed data in federated learning and
proposes a corresponding solution called Federated Ensemble Distillation with
Imbalance Calibration (FEDIC). To deal with non-IID data, FEDIC uses a model
ensemble to take advantage of the diversity of models trained on non-IID data.
Then, a new distillation method with logit adjustment and a calibration gating
network is proposed to solve the long-tail problem effectively. We evaluate
FEDIC on CIFAR-10-LT, CIFAR-100-LT, and ImageNet-LT with a highly non-IID
experimental setting, in comparison with the state-of-the-art methods of
federated learning and long-tail learning. Our code is available at
https://github.com/shangxinyi/FEDIC.
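
To make the two mechanisms in the abstract concrete (ensembling the client models, then distilling with logit adjustment and a calibration gating network), here is a minimal PyTorch-style sketch. It is an illustrative assumption, not the authors' implementation (the real code is in the repository above); names such as `GatingNet`, `client_logits`, `class_prior`, and `tau` are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatingNet(nn.Module):
    """Hypothetical calibration gating network: produces a per-sample
    gate in (0, 1) that mixes raw and logit-adjusted ensemble outputs."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.fc = nn.Linear(num_classes, 1)

    def forward(self, logits: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.fc(logits))

def calibrated_teacher_logits(client_logits, class_prior, gate, tau=1.0):
    # client_logits: [num_clients, batch, num_classes] from the non-IID clients.
    # 1) Ensemble: average client logits to exploit model diversity.
    ens = client_logits.mean(dim=0)
    # 2) Logit adjustment: subtract the log of the (long-tailed) class prior
    #    so head classes no longer dominate the teacher signal.
    adjusted = ens - tau * torch.log(class_prior + 1e-12)
    # 3) Calibration gating: per-sample convex mix of raw and adjusted logits.
    g = gate(ens)
    return g * adjusted + (1.0 - g) * ens

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Standard knowledge-distillation loss with temperature T.
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
```

A server-side step under this sketch would run a batch of auxiliary data through the received client models, build the calibrated teacher logits, and update the global student by minimizing `distillation_loss`; the exact gating architecture and adjustment scheme used by FEDIC are specified in the paper.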
Related papers
- FedLF: Adaptive Logit Adjustment and Feature Optimization in Federated Long-Tailed Learning [5.23984567704876]
Federated learning offers a paradigm for addressing the challenge of preserving privacy in distributed machine learning.
Traditional approaches fail to address the class-wise bias that arises from globally long-tailed data.
The new method, FedLF, introduces three modifications in the local training phase: adaptive logit adjustment, continuous class-centred optimization, and feature decorrelation.
arXiv Detail & Related papers (2024-09-18T16:25:29Z)
- Contrastive Federated Learning with Tabular Data Silos [9.516897428263146]
We propose Contrastive Federated Learning with Data Silos (CFL) as a solution for learning from data silos.
CFL outperforms current methods in addressing these challenges and improves accuracy.
We present positive results that showcase the advantages of our contrastive federated learning approach in complex client environments.
arXiv Detail & Related papers (2024-09-10T00:24:59Z)
- MultiConfederated Learning: Inclusive Non-IID Data handling with Decentralized Federated Learning [1.2726316791083532]
Federated Learning (FL) has emerged as a prominent privacy-preserving technique for enabling use cases like confidential clinical machine learning.
FL operates by aggregating models trained by the remote devices that own the data.
We propose MultiConfederated Learning, a decentralized FL framework designed to handle non-IID data.
arXiv Detail & Related papers (2024-04-20T16:38:26Z)
- Learning From Drift: Federated Learning on Non-IID Data via Drift Regularization [11.813552364878868]
Federated learning algorithms perform reasonably well on independent and identically distributed (IID) data.
However, they suffer greatly in heterogeneous environments, i.e., with non-IID data.
We propose Learning from Drift (LfD), a novel method for effectively training the model in heterogeneous settings.
arXiv Detail & Related papers (2023-09-13T09:23:09Z)
- Rethinking Data Heterogeneity in Federated Learning: Introducing a New Notion and Standard Benchmarks [65.34113135080105]
We show that data heterogeneity in current setups is not necessarily a problem and can in fact be beneficial for the FL participants.
Our observations are intuitive.
Our code is available at https://github.com/MMorafah/FL-SC-NIID.
arXiv Detail & Related papers (2022-09-30T17:15:19Z)
- FedDRL: Deep Reinforcement Learning-based Adaptive Aggregation for Non-IID Data in Federated Learning [4.02923738318937]
An uneven distribution of local data across edge devices (clients) slows model training and reduces accuracy in federated learning.
This work introduces a novel non-IID type encountered in real-world datasets, namely cluster-skew.
We propose FedDRL, a novel FL model that employs deep reinforcement learning to adaptively determine each client's impact factor.
arXiv Detail & Related papers (2022-08-04T04:24:16Z)
- Towards Federated Long-Tailed Learning [76.50892783088702]
Data privacy and class imbalance are the norm rather than the exception in many machine learning tasks.
Recent attempts have been made to address, on the one hand, the problem of learning from pervasive private data and, on the other, learning from long-tailed data.
This paper focuses on learning with long-tailed (LT) data distributions under the context of the popular privacy-preserved federated learning (FL) framework.
arXiv Detail & Related papers (2022-06-30T02:34:22Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Non-IID data and Continual Learning processes in Federated Learning: A long road ahead [58.720142291102135]
Federated Learning is a novel framework that allows multiple devices or institutions to train a machine learning model collaboratively while keeping their data private.
In this work, we formally classify statistical data heterogeneity and review the most notable learning strategies able to cope with it.
At the same time, we introduce approaches from other machine learning frameworks, such as Continual Learning, that also deal with data heterogeneity and could be easily adapted to Federated Learning settings.
arXiv Detail & Related papers (2021-11-26T09:57:11Z)
- Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
arXiv Detail & Related papers (2020-05-03T09:14:31Z)