ST-FL: Style Transfer Preprocessing in Federated Learning for COVID-19
Segmentation
- URL: http://arxiv.org/abs/2203.13680v1
- Date: Fri, 25 Mar 2022 14:33:02 GMT
- Authors: Antonios Georgiadis, Varun Babbar, Fran Silavong, Sean Moran, Rob
Otter
- Abstract summary: We propose a GAN-augmented federated learning model, dubbed ST-FL (Style Transfer Federated Learning), for COVID-19 image segmentation.
We demonstrate that the widely varying data quality on FL client nodes leads to a sub-optimal centralised FL model for COVID-19 chest CT image segmentation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Chest Computed Tomography (CT) scans offer low cost, speed, and
objectivity for COVID-19 diagnosis, and deep learning methods have shown great
promise in assisting the analysis and interpretation of these images. Most
hospitals or countries can train their own models using in-house data; however,
empirical evidence shows that those models perform poorly when tested on new,
unseen cases, surfacing the need for coordinated global collaboration. Due to
privacy regulations, medical data sharing between hospitals and nations is
extremely difficult. We propose a GAN-augmented federated learning model,
dubbed ST-FL (Style Transfer Federated Learning), for COVID-19 image
segmentation. Federated learning (FL) permits a centralised model to be learned
in a secure manner from heterogeneous datasets located in disparate private
data silos. We demonstrate that the widely varying data quality on FL client
nodes leads to a sub-optimal centralised FL model for COVID-19 chest CT image
segmentation. ST-FL is a novel FL framework that is robust in the face of
highly variable data quality at client nodes. The robustness is achieved by a
denoising CycleGAN model at each client of the federation that maps arbitrary
quality images into the same target quality, counteracting the severe data
variability evident in real-world FL use-cases. Each client is provided with
the target style, which is the same for all clients, and trains its own
denoiser. Our qualitative and quantitative results suggest that this FL model
performs comparably to, and in some cases better than, a model that has
centralised access to all the training data.
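The round structure described in the abstract (per-client style normalisation, then standard federated averaging) can be sketched as follows. Everything here is an illustrative stand-in: the paper's denoiser is a CycleGAN and its task model is a segmentation network, whereas this toy uses scalar arithmetic only.

```python
# Illustrative sketch of one ST-FL round; all components are toy stand-ins.

def style_normalise(image, target_style):
    # Stand-in for the per-client denoiser: shift an arbitrary-quality
    # image (a list of pixel intensities) toward the shared target style,
    # modelled here as a target mean intensity.
    mean = sum(image) / len(image)
    return [p + (target_style - mean) for p in image]

def local_update(global_w, images, target_style, lr=0.1):
    # One client step: normalise data quality first, then take a toy
    # gradient step of a one-parameter "model" toward the data mean.
    cleaned = [style_normalise(img, target_style) for img in images]
    data_mean = (sum(sum(img) for img in cleaned)
                 / sum(len(img) for img in cleaned))
    return global_w + lr * (data_mean - global_w)

def fed_round(global_w, client_data, target_style):
    # One FL round: every client trains on style-normalised data and the
    # server averages the returned weights (plain FedAvg).
    local_ws = [local_update(global_w, imgs, target_style)
                for imgs in client_data]
    return sum(local_ws) / len(local_ws)
```

Because every client first maps its data to the same target style, the clients' updates agree despite very different raw intensities, which is the intuition behind the robustness claim.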
Related papers
- FACMIC: Federated Adaptative CLIP Model for Medical Image Classification
We introduce a federated adaptive Contrastive Language-Image Pretraining (CLIP) model for classification tasks.
We employ a lightweight and efficient feature attention module for CLIP that selects suitable features for each client's data.
We propose a domain adaptation technique to reduce differences in data distribution between clients.
arXiv Detail & Related papers (2024-10-08T13:24:10Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- PFL-GAN: When Client Heterogeneity Meets Generative Models in Personalized Federated Learning
We propose a novel generative adversarial network (GAN) sharing and aggregation strategy for personalized federated learning (PFL).
PFL-GAN addresses client heterogeneity in different scenarios. More specifically, we first learn the similarity among clients and then develop a weighted collaborative data aggregation.
The empirical results through the rigorous experimentation on several well-known datasets demonstrate the effectiveness of PFL-GAN.
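The similarity-then-aggregate idea can be illustrated with a minimal sketch. The cosine-similarity weighting below is an assumption made for illustration only, not PFL-GAN's actual aggregation rule.

```python
import math

def cosine(u, v):
    # Cosine similarity between two parameter vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def personalized_aggregate(client_models):
    # For each client, build a personalized model as the average of all
    # clients' parameters, weighted by similarity to that client and
    # normalised so the coefficients sum to one.
    result = []
    for wi in client_models:
        sims = [cosine(wi, wj) for wj in client_models]
        z = sum(sims)
        result.append([sum(s * wj[k] for s, wj in zip(sims, client_models)) / z
                       for k in range(len(wi))])
    return result
```

Clients with similar models thus contribute more to each other's personalized aggregate, while dissimilar clients are down-weighted.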
arXiv Detail & Related papers (2023-08-23T22:38:35Z)
- Rethinking Client Drift in Federated Learning: A Logit Perspective
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose FedCSD, a new class-prototype similarity distillation algorithm that aligns the local and global models within a federated framework.
arXiv Detail & Related papers (2023-08-20T04:41:01Z)
- FedSoup: Improving Generalization and Personalization in Federated Learning via Selective Model Interpolation
Cross-silo federated learning (FL) enables the development of machine learning models on datasets distributed across data centers.
Recent research has found that current FL algorithms face a trade-off between local and global performance when confronted with distribution shifts.
We propose a novel federated model soup method to optimize the trade-off between local and global performance.
arXiv Detail & Related papers (2023-07-20T00:07:29Z)
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape from original data.
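The idea of replacing raw data with a small synthetic set that reproduces the local loss landscape can be made concrete for a toy one-dimensional linear model with squared error. The closed form below is purely illustrative and is not FedDM's actual distribution-matching procedure.

```python
import math

def real_grad(w, xs, ys):
    # Average gradient of the squared error of the model y = w * x
    # over the real client data.
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def distil_point(xs, ys):
    # A single synthetic point (sx, sy) whose gradient equals the average
    # real-data gradient for EVERY w, because for this model the gradient
    # depends on the data only through mean(x*x) and mean(x*y).
    n = len(xs)
    mxx = sum(x * x for x in xs) / n
    mxy = sum(x * y for x, y in zip(xs, ys)) / n
    sx = math.sqrt(mxx)
    return sx, mxy / sx
```

A client can then ship the two numbers (sx, sy) to the server instead of raw data, which is the communication-efficiency argument in miniature.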
arXiv Detail & Related papers (2022-07-20T04:55:18Z)
- IOP-FL: Inside-Outside Personalization for Federated Medical Image Segmentation
Federated learning allows multiple medical institutions to collaboratively learn a global model without centralizing client data.
We propose a novel unified framework for both Inside and Outside model Personalization in FL (IOP-FL).
Our experimental results on two medical image segmentation tasks present significant improvements over SOTA methods on both inside and outside personalization.
arXiv Detail & Related papers (2022-04-16T08:26:19Z)
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraint.
We propose a data-free knowledge distillation method to fine-tune the global model on the server (FedFTG).
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
- Towards Understanding Quality Challenges of the Federated Learning: A First Look from the Lens of Robustness
Federated learning (FL) aims to preserve users' data privacy while leveraging the entire dataset of all participants for training.
FL still tends to suffer from quality issues such as attacks or Byzantine faults.
This paper investigates the effectiveness of state-of-the-art (SOTA) robust FL techniques in the presence of attacks and faults.
arXiv Detail & Related papers (2022-01-05T02:06:39Z)
- Auto-FedAvg: Learnable Federated Averaging for Multi-Institutional Medical Image Segmentation
Federated learning (FL) enables collaborative model training while preserving each participant's privacy.
FedAvg is a standard algorithm that uses fixed weights, often originating from the dataset sizes at each client, to aggregate the distributed learned models on a server during the FL process.
In this work, we design a new data-driven approach, namely Auto-FedAvg, where aggregation weights are dynamically adjusted.
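The fixed-weight aggregation that Auto-FedAvg makes learnable is plain FedAvg; a minimal sketch of that baseline follows (Auto-FedAvg's learned coefficients are not shown).

```python
def fedavg(client_weights, client_sizes):
    # Standard FedAvg: the server averages client parameter vectors with
    # fixed coefficients proportional to each client's dataset size.
    total = sum(client_sizes)
    coeffs = [n / total for n in client_sizes]
    dim = len(client_weights[0])
    return [sum(c * w[k] for c, w in zip(coeffs, client_weights))
            for k in range(dim)]
```

Auto-FedAvg keeps this averaging structure but adjusts the coefficients dynamically during training instead of fixing them to the dataset-size ratios.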
arXiv Detail & Related papers (2021-04-20T18:29:44Z)
- FLOP: Federated Learning on Medical Datasets using Partial Networks
COVID-19, the disease caused by the novel coronavirus, has led to a shortage of medical resources.
Different data-driven deep learning models have been developed to assist the diagnosis of COVID-19.
The data itself is still scarce due to patient privacy concerns.
We propose a simple yet effective algorithm, named Federated Learning on Medical datasets using Partial Networks (FLOP).
arXiv Detail & Related papers (2021-02-10T01:56:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.