Where to Begin? From Random to Foundation Model Instructed
Initialization in Federated Learning for Medical Image Segmentation
- URL: http://arxiv.org/abs/2311.15463v1
- Date: Mon, 27 Nov 2023 00:29:10 GMT
- Title: Where to Begin? From Random to Foundation Model Instructed
Initialization in Federated Learning for Medical Image Segmentation
- Authors: Ming Li, Guang Yang
- Abstract summary: In medical image analysis, Federated Learning (FL) is a key technology that enables privacy-preserved, decentralized data processing.
We propose a novel perspective: exploring the impact of using the foundation model with enormous pre-trained knowledge.
- Score: 11.412151951949102
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In medical image analysis, Federated Learning (FL) stands out as a key
technology that enables privacy-preserved, decentralized data processing,
crucial for handling sensitive medical data. Currently, most FL models employ
random initialization, which has been proven effective in various instances.
However, given the unique challenges posed by non-IID (non-independent and
identically distributed) data in FL, we propose a novel perspective: exploring
the impact of using the foundation model with enormous pre-trained knowledge,
such as the Segment Anything Model (SAM), as an instructive teacher for FL
model initialization in the medical image segmentation task. This work is the
first attempt to utilize a foundation model as an instructive teacher for
initialization in FL, assessing its impact on the performance of FL models,
especially in non-IID data scenarios. Our empirical evaluation on chest X-ray
lung segmentation showcases that FL with foundation model instructed
initialization not only achieves faster convergence but also improves
performance in complex data contexts. These findings offer a new perspective
for model initialization in FL.
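The core idea of the paper, using a foundation model such as SAM as an instructive teacher for FL model initialization, can be pictured as a distillation step run before federated rounds begin. The block below is a minimal, hypothetical NumPy sketch, not the authors' implementation: a per-pixel logistic "student" is fitted to a teacher's soft masks by gradient descent, and the learned weights would serve as the shared starting point for all clients. The function name `distill_init` and the toy data are illustrative.

```python
import numpy as np

def distill_init(teacher_soft_masks, features, lr=0.5, steps=200):
    """Fit a per-pixel logistic student to a teacher's soft masks.

    teacher_soft_masks: (N,) teacher probabilities in [0, 1]
    features:           (N, D) per-pixel feature vectors
    Returns weights w (D,) minimising binary cross-entropy against
    the teacher's soft labels; w would seed the FL global model.
    """
    n, d = features.shape
    w = np.zeros(d)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-features @ w))           # student probabilities
        grad = features.T @ (p - teacher_soft_masks) / n  # BCE gradient
        w -= lr * grad
    return w

# Toy example: the teacher "segments" pixels whose first feature is positive.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
teacher = 1.0 / (1.0 + np.exp(-4.0 * X[:, 0]))            # soft teacher masks
w0 = distill_init(teacher, X)
student = 1.0 / (1.0 + np.exp(-X @ w0))
agreement = np.mean((student > 0.5) == (teacher > 0.5))
print(f"student/teacher agreement: {agreement:.2f}")
```

In a full FL pipeline, `w0` would replace random initialization as the weights broadcast to clients in round zero; the paper's claim is that such a warm start speeds convergence, particularly under non-IID client data.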
Related papers
- Unleashing the Potential of the Diffusion Model in Few-shot Semantic Segmentation [56.87049651707208]
Few-shot Semantic Segmentation has evolved into an in-context task, becoming a crucial element in assessing generalist segmentation models.
Our initial focus lies in understanding how to facilitate interaction between the query image and the support image, resulting in the proposal of a KV fusion method within the self-attention framework.
Based on our analysis, we establish a simple and effective framework named DiffewS, maximally retaining the original Latent Diffusion Model's generative framework.
arXiv Detail & Related papers (2024-10-03T10:33:49Z)
- High-Performance Few-Shot Segmentation with Foundation Models: An Empirical Study [64.06777376676513]
We develop a few-shot segmentation (FSS) framework based on foundation models.
To be specific, we propose a simple approach to extract implicit knowledge from foundation models to construct coarse correspondence.
Experiments on two widely used datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-09-10T08:04:11Z)
- A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FM), the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
arXiv Detail & Related papers (2024-01-09T10:22:23Z)
- A Comprehensive Study on Model Initialization Techniques Ensuring Efficient Federated Learning [0.0]
Federated learning (FL) has emerged as a promising paradigm for training machine learning models in a distributed and privacy-preserving manner.
The choice of model initialization method plays a crucial role in the performance, convergence speed, communication efficiency, and privacy guarantees of federated learning systems.
Our research meticulously compares, categorizes, and delineates the merits and demerits of each technique, examining their applicability across diverse FL scenarios.
arXiv Detail & Related papers (2023-10-31T23:26:58Z)
- A Comprehensive View of Personalized Federated Learning on Heterogeneous Clinical Datasets [0.4926316920996346]
Federated learning (FL) is a key approach to overcoming the data silos that so frequently obstruct the training and deployment of machine-learning models in clinical settings.
This work contributes to a growing body of FL research specifically focused on clinical applications along three important directions.
arXiv Detail & Related papers (2023-09-28T20:12:17Z)
- Universal Domain Adaptation from Foundation Models: A Baseline Study [58.51162198585434]
We make empirical studies of state-of-the-art UniDA methods using foundation models.
We introduce CLIP distillation, a parameter-free method specifically designed to distill target knowledge from CLIP models.
Although simple, our method outperforms previous approaches in most benchmark tasks.
arXiv Detail & Related papers (2023-05-18T16:28:29Z)
- On the Importance and Applicability of Pre-Training for Federated Learning [28.238484580662785]
We conduct a systematic study to explore pre-training for federated learning.
We find that pre-training can not only improve FL but also close its accuracy gap to centralized learning.
We conclude our paper with an attempt to understand the effect of pre-training on FL.
arXiv Detail & Related papers (2022-06-23T06:02:33Z)
- ST-FL: Style Transfer Preprocessing in Federated Learning for COVID-19 Segmentation [1.6799377888527687]
We propose a GAN-augmented federated learning model, dubbed ST-FL (Style Transfer Federated Learning), for COVID-19 image segmentation.
We demonstrate that the widely varying data quality on FL client nodes leads to a sub-optimal centralised FL model for COVID-19 chest CT image segmentation.
arXiv Detail & Related papers (2022-03-25T14:33:02Z)
- Closing the Generalization Gap of Cross-silo Federated Medical Image Segmentation [66.44449514373746]
Cross-silo federated learning (FL) has attracted much attention in medical imaging analysis with deep learning in recent years.
There can be a gap between the model trained from FL and one from centralized training.
We propose a novel training framework, FedSM, to avoid the client drift issue and successfully close the generalization gap.
arXiv Detail & Related papers (2022-03-18T19:50:07Z)
- Differentially private federated deep learning for multi-site medical image segmentation [56.30543374146002]
Collaborative machine learning techniques such as federated learning (FL) enable the training of models on effectively larger datasets without data transfer.
Recent initiatives have demonstrated that segmentation models trained with FL can achieve performance similar to locally trained models.
However, FL is not a fully privacy-preserving technique and privacy-centred attacks can disclose confidential patient data.
arXiv Detail & Related papers (2021-07-06T12:57:32Z)
- Prototype Guided Federated Learning of Visual Feature Representations [15.021124010665194]
Federated Learning (FL) is a framework which enables distributed model training using a large corpus of decentralized training data.
Existing methods aggregate models disregarding their internal representations, which are crucial for training models in vision tasks.
We introduce FedProto, which computes client deviations using margins of representations learned on distributed data.
arXiv Detail & Related papers (2021-05-19T08:29:12Z)
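FedProto's representation-level view can be illustrated with class prototypes, i.e. per-class mean embeddings computed on each client, together with a simple deviation score between local and global prototypes. The NumPy sketch below is a hedged illustration of that idea, not the paper's actual algorithm; the function names and toy embeddings are invented for the example.

```python
import numpy as np

def class_prototypes(embeddings, labels, num_classes):
    """Per-class mean embeddings ('prototypes') computed on one client."""
    return np.stack([embeddings[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def prototype_deviation(local_protos, global_protos):
    """Mean distance between local and global prototypes: a simple proxy
    for the representation-level client deviation FedProto reasons about."""
    return float(np.linalg.norm(local_protos - global_protos, axis=1).mean())

# Toy client: two classes in a 2-D embedding space.
emb = np.array([[0.0, 0.0], [0.2, 0.0], [1.0, 1.0], [1.2, 1.0]])
lab = np.array([0, 0, 1, 1])
local = class_prototypes(emb, lab, num_classes=2)
global_protos = np.array([[0.0, 0.0], [1.0, 1.0]])
print(prototype_deviation(local, global_protos))
```

A server could use such deviation scores to weight or guide client aggregation toward consistent visual feature representations, which is the gap plain weight averaging ignores.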
This list is automatically generated from the titles and abstracts of the papers in this site.