Distributed Contrastive Learning for Medical Image Segmentation
- URL: http://arxiv.org/abs/2208.03808v1
- Date: Sun, 7 Aug 2022 20:47:05 GMT
- Title: Distributed Contrastive Learning for Medical Image Segmentation
- Authors: Yawen Wu, Dewen Zeng, Zhepeng Wang, Yiyu Shi, Jingtong Hu
- Abstract summary: Supervised deep learning needs a large amount of labeled data to achieve high performance.
In medical imaging analysis, each site may only have a limited amount of data and labels, which makes learning ineffective.
We propose two federated self-supervised learning frameworks for medical image segmentation with limited annotations.
- Score: 16.3860181959878
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Supervised deep learning needs a large amount of labeled data to achieve high
performance. However, in medical imaging analysis, each site may only have a
limited amount of data and labels, which makes learning ineffective. Federated
learning (FL) can learn a shared model from decentralized data. But traditional
FL requires fully-labeled data for training, which is very expensive to obtain.
Self-supervised contrastive learning (CL) can learn from unlabeled data for
pre-training, followed by fine-tuning with limited annotations. However, when
adopting CL in FL, the limited data diversity on each site makes federated
contrastive learning (FCL) ineffective. In this work, we propose two federated
self-supervised learning frameworks for volumetric medical image segmentation
with limited annotations. The first one features high accuracy and fits
high-performance servers with high-speed connections. The second one features
lower communication costs, suitable for mobile devices. In the first framework,
features are exchanged during FCL to provide diverse contrastive data to each
site for effective local CL while keeping raw data private. Global structural
matching aligns local and remote features for a unified feature space among
different sites. In the second framework, to reduce the communication cost for
feature exchanging, we propose an optimized method FCLOpt that does not rely on
negative samples. To reduce the communications of model download, we propose
the predictive target network update (PTNU) that predicts the parameters of the
target network. Based on PTNU, we propose the distance prediction (DP) to
remove most of the uploads of the target network. Experiments on a cardiac MRI
dataset show that the two proposed frameworks substantially improve the
segmentation and generalization performance compared with state-of-the-art
techniques.
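To make the first framework's feature-exchange idea concrete, here is a minimal, hypothetical sketch of a contrastive loss whose negative set is enlarged with features shared by other sites. It is not the authors' implementation; names such as `remote_features` and the NT-Xent-style formulation are assumptions for illustration.

```python
# Hypothetical sketch: local contrastive learning whose negatives include
# features exchanged from remote sites (raw images never leave a site).
import torch
import torch.nn.functional as F

def federated_contrastive_loss(query, positive, local_negatives,
                               remote_features, temperature=0.1):
    """NT-Xent-style loss with remote features as extra negatives.

    query, positive:  (B, D) embeddings of two augmented views of local data
    local_negatives:  (N_l, D) embeddings from the local memory bank
    remote_features:  (N_r, D) embeddings shared by other sites
    """
    q = F.normalize(query, dim=1)
    k = F.normalize(positive, dim=1)
    neg = F.normalize(torch.cat([local_negatives, remote_features]), dim=1)

    pos_logits = (q * k).sum(dim=1, keepdim=True)      # (B, 1)
    neg_logits = q @ neg.t()                           # (B, N_l + N_r)
    logits = torch.cat([pos_logits, neg_logits], dim=1) / temperature

    # The positive pair sits at index 0 of every row.
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)
```

The second framework (FCLOpt with PTNU and DP) avoids negative samples and most target-network transfers, so a loss of this form would apply only to the first framework.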
Related papers
- Communication-Efficient Hybrid Federated Learning for E-health with Horizontal and Vertical Data Partitioning [67.49221252724229]
E-health allows smart devices and medical institutions to collaboratively collect patients' data, on which Artificial Intelligence (AI) models are trained to help doctors make diagnoses.
Applying federated learning in e-health faces many challenges.
Medical data is both horizontally and vertically partitioned.
A naive combination of horizontal FL (HFL) and vertical FL (VFL) has limitations including low training efficiency, unsound convergence analysis, and a lack of parameter tuning strategies.
arXiv Detail & Related papers (2024-04-15T19:45:07Z) - Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
arXiv Detail & Related papers (2024-02-28T03:27:10Z) - Semi-Federated Learning: Convergence Analysis and Optimization of A Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve the resulting intractable problem and provide closed-form solutions for the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z) - Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
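As a rough illustration of that split, below is a minimal sketch assuming a simple encoder-plus-head model: the encoder is the globally shared, prunable part and the head is the personalized part kept on the device. The module names and the L1 magnitude-pruning choice are assumptions, not details from the cited paper.

```python
# Hypothetical sketch: split a model into a shared (prunable) global part
# and a personalized local part, and communicate only the global part.
import torch.nn as nn
import torch.nn.utils.prune as prune

class SplitModel(nn.Module):
    def __init__(self, in_dim=64, hidden=128, out_dim=10):
        super().__init__()
        # Global part: shared with the server, eligible for pruning.
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # Personalized part: fine-tuned locally, never uploaded.
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, x):
        return self.head(self.encoder(x))

model = SplitModel()
# Prune 50% of the shared encoder weights by magnitude before upload.
prune.l1_unstructured(model.encoder[0], name="weight", amount=0.5)
# Only the encoder's (pruned) parameters would be sent to the server.
shared_state = {k: v for k, v in model.state_dict().items()
                if k.startswith("encoder")}
```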
arXiv Detail & Related papers (2023-09-04T21:10:45Z) - Unifying and Personalizing Weakly-supervised Federated Medical Image Segmentation via Adaptive Representation and Aggregation [1.121358474059223]
Federated learning (FL) enables multiple sites to collaboratively train powerful deep models without compromising data privacy and security.
Weakly supervised segmentation, which uses sparsely-grained supervision, has attracted increasing attention due to its great potential to reduce annotation costs.
We propose a novel personalized FL framework for medical image segmentation, named FedICRA, which uniformly leverages heterogeneous weak supervision.
arXiv Detail & Related papers (2023-04-12T06:32:08Z) - Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
The impact of on-device storage on the performance of FL has not yet been explored.
In this work, we take the first step to consider the online data selection for FL with limited on-device storage.
arXiv Detail & Related papers (2022-09-01T03:27:33Z) - Federated Self-Supervised Contrastive Learning and Masked Autoencoder for Dermatological Disease Diagnosis [15.20791611477636]
In dermatological disease diagnosis, the private data collected by mobile dermatology assistants exist on distributed mobile devices of patients.
We propose two federated self-supervised learning frameworks for dermatological disease diagnosis with limited labels.
Experiments on dermatological disease datasets show superior accuracy of the proposed frameworks over state-of-the-art methods.
arXiv Detail & Related papers (2022-08-24T02:49:35Z) - Personalizing Federated Medical Image Segmentation via Local Calibration [9.171482226385551]
Using a single model to adapt to various data distributions from different sites is extremely challenging.
We propose a personalized federated framework with Local Calibration (LC-Fed) to leverage the inter-site inconsistencies.
Our method consistently shows superior performance to the state-of-the-art personalized FL methods.
arXiv Detail & Related papers (2022-07-11T06:30:31Z) - Federated Contrastive Learning for Volumetric Medical Image Segmentation [16.3860181959878]
Federated learning (FL) can help in this regard by learning a shared model while keeping training data local for privacy.
Traditional FL requires fully-labeled data for training, which is inconvenient or sometimes infeasible to obtain.
In this work, we propose an FCL framework for volumetric medical image segmentation with limited annotations.
arXiv Detail & Related papers (2022-04-23T03:47:23Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - Communication-Efficient Hierarchical Federated Learning for IoT Heterogeneous Systems with Imbalanced Data [42.26599494940002]
Federated learning (FL) is a distributed learning methodology that allows multiple nodes to cooperatively train a deep learning model.
This paper studies the potential of hierarchical FL in IoT heterogeneous systems.
It proposes an optimized solution for user assignment and resource allocation on multiple edge nodes.
arXiv Detail & Related papers (2021-07-14T08:32:39Z)