Adaptive Personalization in Federated Learning for Highly Non-i.i.d. Data
- URL: http://arxiv.org/abs/2207.03448v1
- Date: Thu, 7 Jul 2022 17:25:04 GMT
- Title: Adaptive Personalization in Federated Learning for Highly Non-i.i.d. Data
- Authors: Yousef Yeganeh, Azade Farshad, Johann Boschmann, Richard Gaus,
Maximilian Frantzen, Nassir Navab
- Abstract summary: Federated learning (FL) is a distributed learning method that offers medical institutes the prospect of collaboration in a global model.
In this work, we investigate an adaptive hierarchical clustering method for FL to produce intermediate semi-global models.
Our experiments demonstrate a significant gain in classification accuracy over standard FL methods under heterogeneous data distributions.
- Score: 37.667379000751325
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is a distributed learning method that offers medical
institutes the prospect of collaboration in a global model while preserving the
privacy of their patients. Although most medical centers conduct similar
medical imaging tasks, their differences, such as specializations, number of
patients, and devices, lead to distinctive data distributions. Data
heterogeneity poses a challenge for FL and the personalization of the local
models. In this work, we investigate an adaptive hierarchical clustering method
for FL to produce intermediate semi-global models, so clients with similar data
distribution have the chance of forming a more specialized model. Our method
forms several clusters consisting of clients with the most similar data
distributions; then, each cluster continues to train separately. Inside the
cluster, we use meta-learning to improve the personalization of the
participants' models. We compare the clustering approach with classical FedAvg
and centralized training by evaluating our proposed methods on the HAM10k
dataset for skin lesion classification with extreme heterogeneous data
distribution. Our experiments demonstrate a significant gain in classification
accuracy over standard FL methods under heterogeneous data distributions.
Moreover, we show that the models converge faster when trained in clusters and
outperform centralized training while using only a small subset of the data.
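The per-cluster training loop the abstract describes can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the cluster assignments are given up front, models are plain NumPy vectors, and local training is mocked as a step toward each client's own optimum.

```python
import numpy as np

def fedavg(weights, sizes):
    """Weighted average of client model parameters (classical FedAvg)."""
    return np.average(np.stack(weights), axis=0, weights=np.asarray(sizes, float))

def train_in_clusters(client_weights, client_sizes, clusters, rounds=3, lr=0.5):
    """Train one semi-global model per cluster of similar clients.

    `clusters` maps a cluster id to the indices of its member clients.
    Local training is mocked as a step of each client toward its own
    parameters, standing in for gradient descent on local data.
    """
    semi_global = {}
    for cid, members in clusters.items():
        model = fedavg([client_weights[i] for i in members],
                       [client_sizes[i] for i in members])
        for _ in range(rounds):
            # each member adapts the cluster model toward its local optimum
            local = [model + lr * (client_weights[i] - model) for i in members]
            model = fedavg(local, [client_sizes[i] for i in members])
        semi_global[cid] = model
    return semi_global

# two clusters of clients whose local optima are far apart (non-i.i.d.)
w = [np.array([0.0, 0.0]), np.array([0.2, 0.1]),
     np.array([5.0, 5.0]), np.array([5.1, 4.9])]
sizes = [100, 50, 80, 80]
models = train_in_clusters(w, sizes, {0: [0, 1], 1: [2, 3]})
```

Because each cluster only ever averages over its own members, the two semi-global models stay near their clusters' local optima instead of collapsing to a single compromise model, which is the intuition behind the reported gains on heterogeneous data.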
Related papers
- FedClust: Tackling Data Heterogeneity in Federated Learning through Weight-Driven Client Clustering [26.478852701376294]
Federated learning (FL) is an emerging distributed machine learning paradigm.
One of the major challenges in FL is the presence of uneven data distributions across client devices.
We propose FedClust, a novel approach for clustered federated learning (CFL) that leverages the correlation between local model weights and the data distribution of clients.
arXiv Detail & Related papers (2024-07-09T02:47:16Z)
- Contrastive encoder pre-training-based clustered federated learning for heterogeneous data [17.580390632874046]
Federated learning (FL) enables distributed clients to collaboratively train a global model while preserving their data privacy.
We propose contrastive pre-training-based clustered federated learning (CP-CFL) to improve the model convergence and overall performance of FL systems.
arXiv Detail & Related papers (2023-11-28T05:44:26Z)
- PFL-GAN: When Client Heterogeneity Meets Generative Models in Personalized Federated Learning [55.930403371398114]
We propose a novel generative adversarial network (GAN) sharing and aggregation strategy for personalized federated learning (PFL).
PFL-GAN addresses client heterogeneity in different scenarios. More specifically, we first learn the similarity among clients and then develop a weighted collaborative data aggregation.
The empirical results through the rigorous experimentation on several well-known datasets demonstrate the effectiveness of PFL-GAN.
arXiv Detail & Related papers (2023-08-23T22:38:35Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting data from unseen but identically distributed test sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
- Tackling Computational Heterogeneity in FL: A Few Theoretical Insights [68.8204255655161]
We introduce and analyse a novel aggregation framework that allows for formalizing and tackling computational heterogeneity in data.
The proposed aggregation algorithms are extensively analyzed from both a theoretical and an experimental perspective.
arXiv Detail & Related papers (2023-07-12T16:28:21Z)
- Integrating Local Real Data with Global Gradient Prototypes for Classifier Re-Balancing in Federated Long-Tailed Learning [60.41501515192088]
Federated Learning (FL) has become a popular distributed learning paradigm that involves multiple clients training a global model collaboratively.
The data samples usually follow a long-tailed distribution in the real world, and FL on the decentralized and long-tailed data yields a poorly-behaved global model.
In this work, we integrate the local real data with the global gradient prototypes to form the local balanced datasets.
arXiv Detail & Related papers (2023-01-25T03:18:10Z)
- From Isolation to Collaboration: Federated Class-Heterogeneous Learning for Chest X-Ray Classification [4.0907576027258985]
Federated learning is a promising paradigm to collaboratively train a global chest x-ray (CXR) classification model.
We propose surgical aggregation, an FL method that uses selective aggregation to collaboratively train a global model.
Our results show that our method outperforms current methods and has better generalizability.
arXiv Detail & Related papers (2023-01-17T03:53:29Z)
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape from original data.
arXiv Detail & Related papers (2022-07-20T04:55:18Z)
- FedSLD: Federated Learning with Shared Label Distribution for Medical Image Classification [6.0088002781256185]
We propose Federated Learning with Shared Label Distribution (FedSLD) for classification tasks.
FedSLD adjusts the contribution of each data sample to the local objective during optimization given knowledge of the distribution.
Our results show that FedSLD achieves better convergence performance than the compared leading FL optimization algorithms.
arXiv Detail & Related papers (2021-10-15T21:38:25Z)
- Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
arXiv Detail & Related papers (2020-05-03T09:14:31Z)
- Federated learning with hierarchical clustering of local updates to improve training on non-IID data [3.3517146652431378]
We show that learning a single joint model is often not optimal in the presence of certain types of non-iid data.
We present a modification to FL by introducing a hierarchical clustering step (FL+HC).
We show how FL+HC allows model training to converge in fewer communication rounds compared to FL without clustering.
arXiv Detail & Related papers (2020-04-24T15:16:01Z)
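The clustering step FL+HC describes, grouping clients whose local updates are similar before continuing training per group, might look like the sketch below. This is an illustrative single-linkage variant in plain NumPy; the paper's actual distance metric, linkage, and threshold may differ.

```python
import numpy as np

def cluster_client_updates(updates, threshold):
    """Agglomerative (single-linkage) clustering of flattened client updates.

    Repeatedly merges the two closest clusters until the nearest pair is
    farther apart than `threshold`, then returns the client groupings.
    """
    X = [np.ravel(u) for u in updates]
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > 1:
        best, pair = float("inf"), None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # single linkage: distance between the closest members
                d = min(np.linalg.norm(X[i] - X[j])
                        for i in clusters[a] for j in clusters[b])
                if d < best:
                    best, pair = d, (a, b)
        if best > threshold:  # remaining clusters are dissimilar; stop merging
            break
        a, b = pair
        clusters[a] += clusters.pop(b)
    return clusters

# clients 0-1 and 2-3 sent similar model updates, respectively
updates = [np.array([0.0, 0.1]), np.array([0.1, 0.0]),
           np.array([3.0, 3.1]), np.array([3.1, 2.9])]
groups = cluster_client_updates(updates, threshold=1.0)
```

The distance threshold controls how many clusters survive: a small threshold yields many specialized groups, while a large one recovers a single joint model, which is the trade-off these clustered-FL papers tune.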
This list is automatically generated from the titles and abstracts of the papers on this site.