IOP-FL: Inside-Outside Personalization for Federated Medical Image
Segmentation
- URL: http://arxiv.org/abs/2204.08467v2
- Date: Wed, 29 Mar 2023 07:25:54 GMT
- Title: IOP-FL: Inside-Outside Personalization for Federated Medical Image
Segmentation
- Authors: Meirui Jiang, Hongzheng Yang, Cheng Chen, Qi Dou
- Abstract summary: Federated learning allows multiple medical institutions to collaboratively learn a global model without centralizing client data.
We propose a novel unified framework for both Inside and Outside model Personalization in FL (IOP-FL).
Our experimental results on two medical image segmentation tasks present significant improvements over SOTA methods on both inside and outside personalization.
- Score: 18.65229252289727
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) allows multiple medical institutions to
collaboratively learn a global model without centralizing client data. It is
difficult, if not impossible, for such a global model to achieve optimal
performance for every individual client, due to the heterogeneity of
medical images across scanners and patient demographics. The problem
becomes even more pronounced when the global model is deployed to unseen clients
outside the federation, whose data distributions were not present during
federated training. To optimize the prediction accuracy for each individual client on
medical imaging tasks, we propose a novel unified framework for both
\textit{Inside and Outside model Personalization in FL} (IOP-FL). Our inside
personalization uses a lightweight gradient-based approach that exploits the
local adapted model for each client, by accumulating both the global gradients
for common knowledge and the local gradients for client-specific optimization.
Moreover, and importantly, the obtained local personalized models and the
global model can form a diverse and informative routing space to personalize an
adapted model for outside FL clients. Hence, we design a new test-time routing
scheme using the consistency loss with a shape constraint to dynamically
incorporate the models, given the distribution information conveyed by the test
data. Our extensive experimental results on two medical image segmentation
tasks present significant improvements over SOTA methods on both inside and
outside personalization, demonstrating the potential of our IOP-FL scheme for
clinical practice.
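The two mechanisms described above can be sketched schematically. This is a minimal illustration, not the paper's implementation: the function names are invented, the inside-personalization update is reduced to plain gradient accumulation, and a closed-form agreement score stands in for the iteratively optimized consistency loss (the shape constraint is omitted entirely).

```python
import numpy as np

def personalize_inside(theta_global, global_grads, local_grads, lr=0.1):
    """Schematic of inside personalization: the local adapted model
    accumulates both global gradients (common knowledge) and local
    gradients (client-specific optimization)."""
    theta = theta_global.copy()
    for g_glob, g_loc in zip(global_grads, local_grads):
        theta = theta - lr * (g_glob + g_loc)
    return theta

def route_outside(model_preds, temperature=1.0):
    """Schematic of outside personalization: weight each candidate model
    (local personalized models plus the global model) by how consistent
    its prediction is with the ensemble.  A closed-form agreement score
    is used here as a stand-in for the paper's optimized consistency
    loss; the shape constraint is not modeled."""
    P = np.stack(model_preds)                  # (K, H, W) probability maps
    mean = P.mean(axis=0)                      # ensemble consensus
    dists = ((P - mean) ** 2).reshape(len(P), -1).mean(axis=1)
    w = np.exp(-dists / temperature)
    w /= w.sum()                               # routing weights over models
    return w, np.tensordot(w, P, axes=1)       # weights, routed prediction
```

In this sketch, candidate models that agree with the consensus receive larger routing weights, so an outlier model contributes less to the prediction for an outside client.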
Related papers
- MAP: Model Aggregation and Personalization in Federated Learning with Incomplete Classes [49.22075916259368]
In some real-world applications, data samples are usually distributed on local devices.
In this paper, we focus on a special kind of non-IID scenario where clients own incomplete classes.
Our proposed algorithm named MAP could simultaneously achieve the aggregation and personalization goals in FL.
arXiv Detail & Related papers (2024-04-14T12:22:42Z) - Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling this data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z) - FedJETs: Efficient Just-In-Time Personalization with Federated Mixture
of Experts [48.78037006856208]
FedJETs is a novel solution by using a Mixture-of-Experts (MoE) framework within a Federated Learning (FL) setup.
Our method leverages the diversity of the clients to train specialized experts on different subsets of classes, and a gating function to route the input to the most relevant expert(s).
Our approach can improve accuracy up to 18% in state of the art FL settings, while maintaining competitive zero-shot performance.
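The gating function described in the FedJETs summary can be illustrated with a generic top-k Mixture-of-Experts routing step. This is a standard MoE sketch under assumed names, not a reproduction of FedJETs' actual gating network.

```python
import numpy as np

def top_k_gate(gate_logits, k=2):
    """Generic top-k MoE gating sketch: select the k experts with the
    highest gating scores and normalize their weights with a softmax.
    (FedJETs' actual gating function is not reproduced here.)"""
    idx = np.argsort(gate_logits)[-k:]          # indices of top-k experts
    e = np.exp(gate_logits[idx] - gate_logits[idx].max())
    w = e / e.sum()                             # weights over selected experts
    return idx, w
```

The input would then be processed only by the selected experts, with their outputs combined using the returned weights.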
arXiv Detail & Related papers (2023-06-14T15:47:52Z) - Visual Prompt Based Personalized Federated Learning [83.04104655903846]
We propose a novel PFL framework for image classification tasks, dubbed pFedPT, that leverages personalized visual prompts to implicitly represent local data distribution information of clients.
Experiments on the CIFAR10 and CIFAR100 datasets show that pFedPT outperforms several state-of-the-art (SOTA) PFL algorithms by a large margin in various settings.
arXiv Detail & Related papers (2023-03-15T15:02:15Z) - Personalizing or Not: Dynamically Personalized Federated Learning with
Incentives [37.42347737911428]
We study personalized federated learning (FL), which learns client-specific models without sharing private data.
We introduce the personalization rate, measured as the fraction of clients willing to train personalized models, into federated settings and propose DyPFL.
This technique incentivizes clients to participate in personalizing local models while allowing the adoption of the global model when it performs better.
arXiv Detail & Related papers (2022-08-12T09:51:20Z) - Adapt to Adaptation: Learning Personalization for Cross-Silo Federated
Learning [6.0088002781256185]
Conventional federated learning aims to train a global model for a federation of clients with decentralized data.
The distribution shift across non-IID datasets, also known as the data heterogeneity, often poses a challenge for this one-global-model-fits-all solution.
We propose APPLE, a personalized cross-silo FL framework that adaptively learns how much each client can benefit from other clients' models.
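The adaptive-weighting idea in the APPLE summary, where each client learns how much to benefit from other clients' models, can be sketched as a convex combination of client model parameters. The function below is illustrative only; APPLE's actual update rule for learning these weights is not reproduced.

```python
import numpy as np

def combine_client_models(client_params, weights):
    """Schematic personalized model: a normalized weighted combination of
    other clients' model parameters, where a larger weight means the
    client benefits more from that peer's model.  (Illustrative sketch;
    not APPLE's actual learning rule.)"""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                             # normalize to a convex combination
    return sum(wi * p for wi, p in zip(w, client_params))
```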
arXiv Detail & Related papers (2021-10-15T22:23:14Z) - Personalized Retrogress-Resilient Framework for Real-World Medical
Federated Learning [8.240098954377794]
We propose a personalized retrogress-resilient framework to produce a superior personalized model for each client.
Our experiments on real-world dermoscopic FL dataset prove that our personalized retrogress-resilient framework outperforms state-of-the-art FL methods.
arXiv Detail & Related papers (2021-10-01T13:24:29Z) - Federated Multi-Task Learning under a Mixture of Distributions [10.00087964926414]
Federated Learning (FL) is a framework for on-device collaborative training of machine learning models.
First efforts in FL focused on learning a single global model with good average performance across clients, but the global model may be arbitrarily bad for a given client.
We study federated MTL under the flexible assumption that each local data distribution is a mixture of unknown underlying distributions.
arXiv Detail & Related papers (2021-08-23T15:47:53Z) - Federated Whole Prostate Segmentation in MRI with Personalized Neural
Architectures [11.563695244722613]
Federated learning (FL) is a way to train machine learning models without the need for centralized datasets.
In this work, we combine FL with an AutoML technique based on local neural architecture search by training a "supernet".
The proposed method is evaluated on four different datasets from 3D prostate MRI and shown to improve the local models' performance after adaptation.
arXiv Detail & Related papers (2021-07-16T20:35:29Z) - Personalized Federated Learning with First Order Model Optimization [76.81546598985159]
We propose an alternative to federated learning, where each client federates with other relevant clients to obtain a stronger model per client-specific objectives.
We do not assume knowledge of underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest.
Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
arXiv Detail & Related papers (2020-12-15T19:30:29Z) - Federated Mutual Learning [65.46254760557073]
Federated Mutual Learning (FML) allows clients to train a generalized model collaboratively and a personalized model independently.
The experiments show that FML can achieve better performance than alternatives in typical federated learning settings.
arXiv Detail & Related papers (2020-06-27T09:35:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.