FedDropoutAvg: Generalizable federated learning for histopathology image
classification
- URL: http://arxiv.org/abs/2111.13230v1
- Date: Thu, 25 Nov 2021 19:30:37 GMT
- Authors: Gozde N. Gunesli, Mohsin Bilal, Shan E Ahmed Raza, and Nasir M.
Rajpoot
- Abstract summary: Federated learning (FL) enables collaborative learning of a deep learning model without sharing the data of participating sites.
We propose FedDropoutAvg, a new federated learning approach for training a generalizable model.
We show that the proposed approach is more generalizable than other state-of-the-art federated training approaches.
- Score: 11.509801043891837
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) enables collaborative learning of a deep learning
model without sharing the data of participating sites. FL in medical image
analysis tasks is relatively new and open for enhancements. In this study, we
propose FedDropoutAvg, a new federated learning approach for training a
generalizable model. The proposed method exploits randomness both in
client selection and in the federated averaging process. We compare
FedDropoutAvg to several algorithms in an FL scenario on a real-world multi-site
histopathology image classification task. We show that with FedDropoutAvg, the
final model can achieve performance better than other FL approaches and closer
to a classical deep learning model that requires all data to be shared for
centralized training. We test the trained models on a large dataset consisting
of 1.2 million image tiles from 21 different centers. To evaluate the
generalization ability of the proposed approach, we use held-out test sets from
centers whose data was used in the FL, as well as unseen data from other
independent centers whose data was not used in the federated training. We show
that the proposed approach is more generalizable than other state-of-the-art
federated training approaches. To the best of our knowledge, ours is the first
study to use a randomized client and local model parameter selection procedure
in a federated setting for a medical image analysis task.
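As a rough illustration of the aggregation described in the abstract, the sketch below combines the two sources of randomness: sampling a client subset each round and applying a dropout mask to client parameters during averaging. The subset fraction, per-parameter mask, and fallback to the previous global value are assumptions for illustration, not the paper's published hyperparameters.

```python
import numpy as np

def fed_dropout_avg(global_params, client_params, client_frac=0.5,
                    dropout_rate=0.2, rng=None):
    """One server aggregation round in the spirit of FedDropoutAvg.

    A sketch only: the subset size, the per-parameter dropout mask, and
    the fallback behavior are assumptions, not the paper's exact design.

    global_params: current global parameter vector (1-D array).
    client_params: list of locally trained parameter vectors, one per site.
    """
    rng = rng if rng is not None else np.random.default_rng()
    n_clients = len(client_params)

    # Randomness source 1: sample a random subset of clients this round.
    n_selected = max(1, int(round(client_frac * n_clients)))
    selected = rng.choice(n_clients, size=n_selected, replace=False)

    dim = global_params.shape[0]
    summed = np.zeros(dim)
    counts = np.zeros(dim)
    for idx in selected:
        # Randomness source 2: randomly drop some of this client's
        # parameters from the average (a per-parameter dropout mask).
        keep = rng.random(dim) >= dropout_rate
        summed += np.where(keep, client_params[idx], 0.0)
        counts += keep

    # Average each parameter over the clients that contributed it; a
    # parameter dropped by every selected client keeps its old global value.
    return np.where(counts > 0, summed / np.maximum(counts, 1), global_params)
```

With `client_frac=1.0` and `dropout_rate=0.0` this reduces to a plain equal-weight federated average, which makes the role of the two randomization knobs easy to see.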
Related papers
- Learn What You Need in Personalized Federated Learning [53.83081622573734]
Learn2pFed is a novel algorithm-unrolling-based personalized federated learning framework.
We show that Learn2pFed significantly outperforms previous personalized federated learning methods.
arXiv Detail & Related papers (2024-01-16T12:45:15Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- FedSoup: Improving Generalization and Personalization in Federated Learning via Selective Model Interpolation [32.36334319329364]
Cross-silo federated learning (FL) enables the development of machine learning models on datasets distributed across data centers.
Recent research has found that current FL algorithms face a trade-off between local and global performance when confronted with distribution shifts.
We propose a novel federated model soup method to optimize the trade-off between local and global performance.
arXiv Detail & Related papers (2023-07-20T00:07:29Z) - FedDM: Iterative Distribution Matching for Communication-Efficient
Federated Learning [87.08902493524556]
Federated learning(FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape from original data.
arXiv Detail & Related papers (2022-07-20T04:55:18Z)
- Adaptive Personalization in Federated Learning for Highly Non-i.i.d. Data [37.667379000751325]
Federated learning (FL) is a distributed learning method that offers medical institutes the prospect of collaboration in a global model.
In this work, we investigate an adaptive hierarchical clustering method for FL to produce intermediate semi-global models.
Our experiments demonstrate significant performance gain in heterogeneous distribution compared to standard FL methods in classification accuracy.
arXiv Detail & Related papers (2022-07-07T17:25:04Z)
- Federated Cross Learning for Medical Image Segmentation [23.075410916203005]
Federated learning (FL) can collaboratively train deep learning models using isolated patient data owned by different hospitals for various clinical applications.
A major problem of FL is its performance degradation when dealing with data that are not independently and identically distributed (non-iid).
arXiv Detail & Related papers (2022-04-05T18:55:02Z)
- Closing the Generalization Gap of Cross-silo Federated Medical Image Segmentation [66.44449514373746]
Cross-silo federated learning (FL) has attracted much attention in medical imaging analysis with deep learning in recent years.
There can be a gap between the model trained from FL and one from centralized training.
We propose a novel training framework, FedSM, to avoid the client drift issue and successfully close the generalization gap.
arXiv Detail & Related papers (2022-03-18T19:50:07Z)
- An Expectation-Maximization Perspective on Federated Learning [75.67515842938299]
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
In this work, we view the server-orchestrated federated learning process as a hierarchical latent variable model where the server provides the parameters of a prior distribution over the client-specific model parameters.
We show that with simple Gaussian priors and a hard version of the well known Expectation-Maximization (EM) algorithm, learning in such a model corresponds to FedAvg, the most popular algorithm for the federated learning setting.
arXiv Detail & Related papers (2021-11-19T12:58:59Z)
- Adapt to Adaptation: Learning Personalization for Cross-Silo Federated Learning [6.0088002781256185]
Conventional federated learning aims to train a global model for a federation of clients with decentralized data.
The distribution shift across non-IID datasets, also known as the data heterogeneity, often poses a challenge for this one-global-model-fits-all solution.
We propose APPLE, a personalized cross-silo FL framework that adaptively learns how much each client can benefit from other clients' models.
arXiv Detail & Related papers (2021-10-15T22:23:14Z)
- Federated Multi-Task Learning under a Mixture of Distributions [10.00087964926414]
Federated Learning (FL) is a framework for on-device collaborative training of machine learning models.
First efforts in FL focused on learning a single global model with good average performance across clients, but the global model may be arbitrarily bad for a given client.
We study federated MTL under the flexible assumption that each local data distribution is a mixture of unknown underlying distributions.
arXiv Detail & Related papers (2021-08-23T15:47:53Z)
- Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
arXiv Detail & Related papers (2020-05-03T09:14:31Z)
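One of the related papers above ("An Expectation-Maximization Perspective on Federated Learning") notes that FedAvg corresponds to a hard EM algorithm with Gaussian priors over client parameters. A minimal sketch of the FedAvg server step makes that correspondence concrete; the naming and weighting scheme here are illustrative assumptions, not that paper's exact derivation.

```python
import numpy as np

def fedavg_aggregate(client_params, client_sizes):
    """Data-size-weighted parameter average: the FedAvg server step.

    Under the EM view, this plays the role of the M-step updating the
    mean of a Gaussian prior over client-specific parameters, while each
    client's local training acts as a hard E-step (MAP estimate).
    Sketch only; details are assumed for illustration.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    weights = sizes / sizes.sum()  # clients with more data get more weight
    return sum(w * p for w, p in zip(weights, client_params))
```

For two clients with 1 and 3 samples, the second client's parameters receive three times the weight of the first, exactly as in the standard FedAvg update.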
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences of its use.