Federated Learning for Data and Model Heterogeneity in Medical Imaging
- URL: http://arxiv.org/abs/2308.00155v1
- Date: Mon, 31 Jul 2023 21:08:45 GMT
- Title: Federated Learning for Data and Model Heterogeneity in Medical Imaging
- Authors: Hussain Ahmad Madni, Rao Muhammad Umer and Gian Luca Foresti
- Abstract summary: Federated Learning (FL) is an evolving machine learning method in which multiple clients participate in collaborative learning without sharing their data with each other or with the central server.
In real-world applications such as hospitals and industry, FL faces the challenges of data heterogeneity and model heterogeneity.
We propose a method, MDH-FL (Exploiting Model and Data Heterogeneity in FL), to solve these problems.
- Score: 19.0931609571649
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) is an evolving machine learning method in which
multiple clients participate in collaborative learning without sharing their
data with each other or with the central server. In real-world applications such
as hospitals and industry, FL faces the challenges of data heterogeneity and
model heterogeneity as an inevitable part of collaborative training. More
specifically, different organizations, such as hospitals, have their own
private data and customized models for local training. To the best of our
knowledge, the existing methods do not effectively address both problems of
model heterogeneity and data heterogeneity in FL. In this paper, we address
data and model heterogeneity simultaneously and propose a method, MDH-FL
(Exploiting Model and Data Heterogeneity in FL), to solve these problems and
enhance the efficiency of the global model in FL. We use knowledge distillation
and a symmetric loss to minimize the heterogeneity and its impact on the model
performance. Knowledge distillation is used to address model heterogeneity,
and the symmetric loss tackles data and label heterogeneity. We evaluate our
method on medical datasets to reflect the real-world scenario of hospitals,
and compare it with existing methods. The
experimental results demonstrate the superiority of the proposed approach over
the other existing methods.
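The abstract names two ingredients, knowledge distillation and a symmetric loss, without giving their exact formulations. As a hedged illustration only (not the paper's actual equations), a minimal NumPy sketch of the two common forms these take, temperature-softened distillation and symmetric cross-entropy, could look like:

```python
import numpy as np

def softmax(z, t=1.0):
    # temperature-scaled softmax; t > 1 softens the distribution
    z = z / t
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def symmetric_ce(p_true, p_pred, a=1.0, b=1.0, eps=1e-7):
    # symmetric cross-entropy: forward CE plus reverse CE,
    # often used for robustness to noisy/heterogeneous labels
    ce = -(p_true * np.log(p_pred + eps)).sum(axis=-1)
    rce = -(p_pred * np.log(p_true + eps)).sum(axis=-1)
    return float((a * ce + b * rce).mean())

def kd_loss(student_logits, teacher_logits, t=2.0, eps=1e-7):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by t^2 as in standard knowledge distillation
    p_t = softmax(teacher_logits, t)
    p_s = softmax(student_logits, t)
    kl = (p_t * (np.log(p_t + eps) - np.log(p_s + eps))).sum(axis=-1)
    return float(kl.mean() * t * t)
```

The distillation term vanishes when student and teacher agree, so heterogeneous client models are pulled toward a shared output distribution without sharing weights.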
Related papers
- A Unified Solution to Diverse Heterogeneities in One-shot Federated Learning [14.466679488063217]
One-shot federated learning (FL) limits the communication between the server and clients to a single round.
We propose a unified, data-free, one-shot FL framework (FedHydra) that can effectively address both model and data heterogeneity.
arXiv Detail & Related papers (2024-10-28T15:20:52Z)
- Addressing Heterogeneity in Federated Learning: Challenges and Solutions for a Shared Production Environment [1.2499537119440245]
Federated learning (FL) has emerged as a promising approach to training machine learning models across decentralized data sources.
This paper provides a comprehensive overview of data heterogeneity in FL within the context of manufacturing.
We discuss the impact of these types of heterogeneity on model training and review current methodologies for mitigating their adverse effects.
arXiv Detail & Related papers (2024-08-18T17:49:44Z)
- Addressing Data Heterogeneity in Federated Learning of Cox Proportional Hazards Models [8.798959872821962]
This paper outlines an approach in the domain of federated survival analysis, specifically the Cox Proportional Hazards (CoxPH) model.
We present an FL approach that employs feature-based clustering to enhance model accuracy across synthetic datasets and real-world applications.
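The summary does not specify how the feature-based clustering is performed. As a hypothetical sketch (the clustering criterion and `feature_stats` representation are assumptions), one could group clients by per-client feature summaries with a simple k-means and then train one federated CoxPH model per cluster:

```python
import numpy as np

def cluster_clients(feature_stats, k=2, iters=50, seed=0):
    """Assign each client to one of k clusters via plain k-means.

    feature_stats: (n_clients, d) array, one summary vector per client
    (e.g. the per-feature means of its local covariates).
    Returns an (n_clients,) array of cluster labels.
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(feature_stats, dtype=float)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # squared distances of every client to every center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

Each cluster then runs its own federated aggregation, so clients with similar covariate distributions share a model instead of averaging across incompatible populations.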
arXiv Detail & Related papers (2024-07-20T18:34:20Z)
- Synthetic Data Aided Federated Learning Using Foundation Models [4.666380225768727]
We propose Differentially Private Synthetic Data Aided Federated Learning Using Foundation Models (DPSDA-FL)
Our experimental results have shown that DPSDA-FL can improve class recall and classification accuracy of the global model by up to 26% and 9%, respectively, in FL with non-IID data.
arXiv Detail & Related papers (2024-07-06T20:31:43Z)
- Fake It Till Make It: Federated Learning with Consensus-Oriented Generation [52.82176415223988]
We propose federated learning with consensus-oriented generation (FedCOG)
FedCOG consists of two key components at the client side: complementary data generation and knowledge-distillation-based model training.
Experiments on classical and real-world FL datasets show that FedCOG consistently outperforms state-of-the-art methods.
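FedCOG's exact objective is not reproduced in this summary. A hedged sketch of the general pattern it describes, a task loss on real local data combined with a distillation loss toward the global model on generated data (the weighting `lam` and the interface are assumptions), might be:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fedcog_client_loss(local_logits, local_labels,
                       gen_logits_student, gen_logits_global,
                       lam=0.5, eps=1e-7):
    # task loss: cross-entropy on the client's real local data
    p = softmax(local_logits)
    ce = -np.log(p[np.arange(len(local_labels)), local_labels] + eps).mean()
    # distillation loss: pull the local model toward the global model's
    # predictions on generated (consensus-oriented) samples
    pt = softmax(gen_logits_global)
    ps = softmax(gen_logits_student)
    kd = (pt * (np.log(pt + eps) - np.log(ps + eps))).sum(axis=-1).mean()
    return float(ce + lam * kd)
```

The distillation term acts as a regularizer that counteracts client drift: it is zero when the local model already agrees with the global model on the generated data.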
arXiv Detail & Related papers (2023-12-10T18:49:59Z)
- FedSym: Unleashing the Power of Entropy for Benchmarking the Algorithms for Federated Learning [1.4656078321003647]
Federated learning (FL) is a decentralized machine learning approach where independent learners process data privately.
We study the currently popular data partitioning techniques and visualize their main disadvantages.
We propose a method that leverages entropy and symmetry to construct 'the most challenging' and controllable data distributions.
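How FedSym combines entropy and symmetry is not detailed here; as a minimal illustration of the entropy side, the heterogeneity of a single client's partition can be scored by the Shannon entropy of its label histogram (a uniform client scores log K, a single-class client scores 0):

```python
import numpy as np

def label_entropy(labels, num_classes):
    # Shannon entropy (nats) of a client's label distribution;
    # maximum is log(num_classes), minimum 0 for a one-class client
    counts = np.bincount(labels, minlength=num_classes)
    p = counts / counts.sum()
    p = p[p > 0]  # 0 * log(0) is defined as 0
    return float(-(p * np.log(p)).sum())
```

Sweeping this score from 0 to log K gives a controllable knob for constructing increasingly difficult non-IID partitions.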
arXiv Detail & Related papers (2023-10-11T18:39:08Z) - Towards Understanding and Mitigating Dimensional Collapse in Heterogeneous Federated Learning [112.69497636932955]
Federated learning aims to train models across different clients without the sharing of data for privacy considerations.
We study how data heterogeneity affects the representations of the globally aggregated models.
We propose FedDecorr, a novel method that can effectively mitigate dimensional collapse in federated learning.
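The summary does not state FedDecorr's regularizer; one common way to penalize dimensional collapse (assumed here for illustration, the paper's exact term may differ) is the squared Frobenius norm of the off-diagonal entries of the batch correlation matrix of the representations:

```python
import numpy as np

def decorr_reg(features, eps=1e-7):
    # features: (batch, dim) representation matrix.
    # Standardize each dimension, form the correlation matrix,
    # and penalize its off-diagonal entries: collapsed (redundant)
    # dimensions are highly correlated and score high.
    z = features - features.mean(axis=0, keepdims=True)
    z = z / (z.std(axis=0, keepdims=True) + eps)
    c = (z.T @ z) / len(z)
    off = c - np.diag(np.diag(c))
    return float((off ** 2).sum())
```

Adding this term to each client's loss encourages representations to spread across all dimensions instead of collapsing into a low-dimensional subspace.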
arXiv Detail & Related papers (2022-10-01T09:04:17Z) - Rethinking Data Heterogeneity in Federated Learning: Introducing a New
Notion and Standard Benchmarks [65.34113135080105]
We show that data heterogeneity in current setups is not necessarily a problem and can in fact be beneficial for the FL participants.
Our observations are intuitive.
Our code is available at https://github.com/MMorafah/FL-SC-NIID.
arXiv Detail & Related papers (2022-09-30T17:15:19Z) - Differentiable Agent-based Epidemiology [71.81552021144589]
We introduce GradABM: a scalable, differentiable design for agent-based modeling that is amenable to gradient-based learning with automatic differentiation.
GradABM can quickly simulate million-size populations in few seconds on commodity hardware, integrate with deep neural networks and ingest heterogeneous data sources.
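GradABM itself uses automatic differentiation over an agent-based model; as a much smaller stand-in (a smooth mean-field SIR step, with finite differences in place of autodiff, all assumptions for illustration), the idea of calibrating an epidemic parameter by gradients can be sketched as:

```python
import numpy as np

def sir_step(s, i, r, beta, gamma):
    # one smooth (hence differentiable) mean-field SIR update
    new_inf = beta * s * i
    new_rec = gamma * i
    return s - new_inf, i + new_inf - new_rec, r + new_rec

def final_infected(beta, gamma=0.1, steps=50):
    # cumulative share ever infected after `steps` updates
    s, i, r = 0.99, 0.01, 0.0
    for _ in range(steps):
        s, i, r = sir_step(s, i, r, beta, gamma)
    return i + r

def grad_beta(beta, h=1e-5):
    # central finite difference as a stand-in for autodiff:
    # d(final_infected)/d(beta), usable for gradient-based calibration
    return (final_infected(beta + h) - final_infected(beta - h)) / (2 * h)
```

Because every step is smooth, the same loop written in a framework such as PyTorch or JAX would yield this gradient by automatic differentiation, which is what makes gradient-based calibration of such models possible.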
arXiv Detail & Related papers (2022-07-20T07:32:02Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated
Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z) - Adversarial Sample Enhanced Domain Adaptation: A Case Study on
Predictive Modeling with Electronic Health Records [57.75125067744978]
We propose a data augmentation method to facilitate domain adaptation.
Adversarially generated samples are used during domain adaptation.
Results confirm the effectiveness of our method and its generality across different tasks.
arXiv Detail & Related papers (2021-01-13T03:20:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.