Optimizing Federated Learning for Medical Image Classification on
Distributed Non-iid Datasets with Partial Labels
- URL: http://arxiv.org/abs/2303.06180v1
- Date: Fri, 10 Mar 2023 19:23:33 GMT
- Title: Optimizing Federated Learning for Medical Image Classification on
Distributed Non-iid Datasets with Partial Labels
- Authors: Pranav Kulkarni, Adway Kanhere, Paul H. Yi, Vishwa S. Parekh
- Abstract summary: FedFBN is a federated learning framework that draws inspiration from transfer learning by using pretrained networks as the model backend.
We evaluate FedFBN with current FL strategies using synthetic iid toy datasets and large-scale non-iid datasets across scenarios with partial and complete labels.
- Score: 3.6704226968275258
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Numerous large-scale chest x-ray datasets have spearheaded expert-level
detection of abnormalities using deep learning. However, these datasets focus
on detecting a subset of disease labels that could be present, thus making them
distributed and non-iid with partial labels. Recent literature has indicated
the impact of batch normalization layers on the convergence of federated
learning due to domain shift associated with non-iid data with partial labels.
To that end, we propose FedFBN, a federated learning framework that draws
inspiration from transfer learning by using pretrained networks as the model
backend and freezing the batch normalization layers throughout the training
process. We evaluate FedFBN with current FL strategies using synthetic iid toy
datasets and large-scale non-iid datasets across scenarios with partial and
complete labels. Our results demonstrate that FedFBN outperforms current
aggregation strategies for training global models using distributed and non-iid
data with partial labels.
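The core mechanics described above, averaging client updates while leaving the pretrained batch-normalization parameters untouched, can be sketched framework-agnostically. This is a minimal illustration under assumed conventions (parameters stored as flat dicts of float lists, BN entries identified by a name prefix); the function name `fedavg_frozen_bn` is illustrative, not from the paper.

```python
def fedavg_frozen_bn(client_states, frozen_prefixes=("bn",)):
    """FedAvg over clients' parameter dicts, skipping frozen BN entries.

    client_states: list of dicts mapping parameter name -> list of floats.
    frozen_prefixes: name prefixes of parameters frozen at their
    pretrained values (batch-norm layers in the FedFBN setting).
    """
    global_state = {}
    for key in client_states[0]:
        if any(key.startswith(p) for p in frozen_prefixes):
            # Frozen BN parameters are identical across clients by
            # construction, so keep the pretrained values as-is.
            global_state[key] = client_states[0][key]
        else:
            # Element-wise mean of the clients' trained parameters.
            cols = zip(*(cs[key] for cs in client_states))
            global_state[key] = [sum(c) / len(client_states) for c in cols]
    return global_state

# Two toy clients: conv weights diverge, frozen BN stays at pretrained values.
clients = [
    {"conv.weight": [1.0, 2.0], "bn.gamma": [0.5, 0.5]},
    {"conv.weight": [3.0, 4.0], "bn.gamma": [0.5, 0.5]},
]
merged = fedavg_frozen_bn(clients)
```

Here `merged["conv.weight"]` is the element-wise average of the clients' weights, while `merged["bn.gamma"]` retains the pretrained values, mirroring how freezing BN sidesteps the domain-shift statistics mismatch across non-iid clients.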
Related papers
- Co-Training with Active Contrastive Learning and Meta-Pseudo-Labeling on 2D Projections for Deep Semi-Supervised Learning [42.56511266791916]
Semi-supervised learning (SSL) tackles the label-scarcity challenge by capitalizing on scarce labeled and abundant unlabeled data.
We present active-DeepFA, a method that effectively combines CL, teacher-student-based meta-pseudo-labeling and AL.
arXiv Detail & Related papers (2025-04-25T19:41:45Z)
- Diff-CL: A Novel Cross Pseudo-Supervision Method for Semi-supervised Medical Image Segmentation [9.264789041589209]
This work introduces a semi-supervised medical image segmentation framework from the distribution perspective (Diff-CL)
We propose a cross-pseudo-supervision learning mechanism between diffusion and convolution segmentation networks.
Our method achieves state-of-the-art (SOTA) performance across three datasets, including left atrium, brain tumor, and NIH pancreas datasets.
arXiv Detail & Related papers (2025-03-12T13:59:09Z)
- Continuous Contrastive Learning for Long-Tailed Semi-Supervised Recognition [50.61991746981703]
Current state-of-the-art LTSSL approaches rely on high-quality pseudo-labels for large-scale unlabeled data.
This paper introduces a novel probabilistic framework that unifies various recent proposals in long-tail learning.
We introduce a continuous contrastive learning method, CCL, extending our framework to unlabeled data using reliable and smoothed pseudo-labels.
arXiv Detail & Related papers (2024-10-08T15:06:10Z)
- Federated Impression for Learning with Distributed Heterogeneous Data [19.50235109938016]
Federated learning (FL) provides a paradigm that can learn from distributed datasets across clients without requiring them to share data.
In FL, sub-optimal convergence is common among data from different health centers due to the variety in data collection protocols and patient demographics across centers.
We propose FedImpres, which alleviates catastrophic forgetting by restoring synthetic data that represents the global information as a federated impression.
arXiv Detail & Related papers (2024-09-11T15:37:52Z)
- Exploiting Label Skews in Federated Learning with Model Concatenation [39.38427550571378]
Federated Learning (FL) has emerged as a promising solution to perform deep learning on different data owners without exchanging raw data.
Among different non-IID types, label skews have been challenging and common in image classification and other tasks.
We propose FedConcat, a simple and effective approach that concatenates these local models as the base of the global model.
arXiv Detail & Related papers (2023-12-11T10:44:52Z)
- All Points Matter: Entropy-Regularized Distribution Alignment for Weakly-supervised 3D Segmentation [67.30502812804271]
Pseudo-labels are widely employed in weakly supervised 3D segmentation tasks where only sparse ground-truth labels are available for learning.
We propose a novel learning strategy to regularize the generated pseudo-labels and effectively narrow the gaps between pseudo-labels and model predictions.
arXiv Detail & Related papers (2023-05-25T08:19:31Z)
- FedMT: Federated Learning with Mixed-type Labels [41.272679061636794]
In federated learning (FL), classifiers are trained on datasets from multiple data centers without exchanging data across them.
This limitation becomes particularly notable in domains like disease diagnosis, where different clinical centers may adhere to different standards.
This paper addresses this important yet under-explored setting of FL, namely FL with mixed-type labels.
We introduce a model-agnostic approach called FedMT, which estimates label space correspondences and projects classification scores to construct loss functions.
arXiv Detail & Related papers (2022-10-05T06:27:25Z)
- Rethinking Data Heterogeneity in Federated Learning: Introducing a New Notion and Standard Benchmarks [65.34113135080105]
We show that data heterogeneity in current setups is not necessarily a problem and can in fact be beneficial for the FL participants.
Our observations are intuitive.
Our code is available at https://github.com/MMorafah/FL-SC-NIID.
arXiv Detail & Related papers (2022-09-30T17:15:19Z)
- Incorporating Semi-Supervised and Positive-Unlabeled Learning for Boosting Full Reference Image Quality Assessment [73.61888777504377]
Full-reference (FR) image quality assessment (IQA) evaluates the visual quality of a distorted image by measuring its perceptual difference from a pristine-quality reference.
Unlabeled data can be easily collected from an image degradation or restoration process, making it encouraging to exploit unlabeled training data to boost FR-IQA performance.
In this paper, we suggest incorporating semi-supervised and positive-unlabeled (PU) learning to exploit unlabeled data while mitigating the adverse effect of outliers.
arXiv Detail & Related papers (2022-04-19T09:10:06Z)
- Learning Semantic Segmentation from Multiple Datasets with Label Shifts [101.24334184653355]
This paper proposes UniSeg, an effective approach to automatically train models across multiple datasets with differing label spaces.
Specifically, we propose two losses that account for conflicting and co-occurring labels to achieve better generalization performance in unseen domains.
arXiv Detail & Related papers (2022-02-28T18:55:19Z)
- FedSLD: Federated Learning with Shared Label Distribution for Medical Image Classification [6.0088002781256185]
We propose Federated Learning with Shared Label Distribution (FedSLD) for classification tasks.
FedSLD adjusts the contribution of each data sample to the local objective during optimization given knowledge of the distribution.
Our results show that FedSLD achieves better convergence performance than the compared leading FL optimization algorithms.
arXiv Detail & Related papers (2021-10-15T21:38:25Z)
- Learning from Partially Overlapping Labels: Image Segmentation under Annotation Shift [68.6874404805223]
We propose several strategies for learning from partially overlapping labels in the context of abdominal organ segmentation.
We find that combining a semi-supervised approach with an adaptive cross entropy loss can successfully exploit heterogeneously annotated data.
arXiv Detail & Related papers (2021-07-13T09:22:24Z)
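A recurring building block across these partial-label settings is a loss evaluated only on the classes a given dataset actually annotates. The sketch below is a generic masked binary cross-entropy, an illustrative simplification rather than any paper's exact adaptive loss; the function name and the 0/1 `observed` mask convention are assumptions.

```python
import math

def masked_bce(probs, labels, observed):
    """Binary cross-entropy averaged only over annotated classes.

    probs:    predicted probabilities per class.
    labels:   ground-truth 0/1 targets; ignored where not observed.
    observed: 1 if this dataset annotates the class, else 0.
    """
    terms = [
        -(y * math.log(p) + (1 - y) * math.log(1 - p))
        for p, y, m in zip(probs, labels, observed)
        if m  # unannotated classes contribute no loss (and no gradient)
    ]
    return sum(terms) / len(terms) if terms else 0.0

# Class 2 is unannotated in this dataset, so its prediction is ignored.
loss = masked_bce([0.9, 0.2, 0.5], [1, 0, 1], [1, 1, 0])
```

Masking unannotated classes avoids treating missing labels as negatives, which is the failure mode that the partial-label FL and annotation-shift methods listed above are designed to address.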
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.