FedMix: Mixed Supervised Federated Learning for Medical Image
Segmentation
- URL: http://arxiv.org/abs/2205.01840v1
- Date: Wed, 4 May 2022 01:17:53 GMT
- Title: FedMix: Mixed Supervised Federated Learning for Medical Image
Segmentation
- Authors: Jeffry Wicaksana, Zengqiang Yan, Dong Zhang, Xijie Huang, Huimin Wu,
Xin Yang, and Kwang-Ting Cheng
- Abstract summary: We propose a label-agnostic unified federated learning framework, named FedMix, for medical image segmentation based on mixed image labels.
In FedMix, each client updates the federated model by integrating and effectively making use of all available labeled data.
- Compared to existing methods, FedMix not only removes the constraint of a single level of image supervision but also dynamically adjusts the aggregation weight of each local client.
- Score: 29.728635583886046
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The purpose of federated learning is to enable multiple clients to jointly
train a machine learning model without sharing data. However, existing
methods for training an image segmentation model rely on the unrealistic
assumption that the training set of each local client is annotated in a
similar fashion and thus follows the same level of image supervision. To
relax this assumption, in this work, we propose a label-agnostic unified
federated learning framework, named FedMix, for medical image segmentation
based on mixed image labels. In FedMix, each client updates the federated model
by integrating and effectively making use of all available labeled data,
ranging from strong pixel-level labels and weaker bounding-box labels to the weakest
image-level class labels. Based on these local models, we further propose an
adaptive weight assignment procedure across local clients, where each client
learns an aggregation weight during the global model update. Compared to
existing methods, FedMix not only removes the constraint of a single level of
image supervision but also dynamically adjusts the aggregation weight of each
local client, achieving rich yet discriminative feature
representations. To evaluate its effectiveness, experiments have been carried
out on two challenging medical image segmentation tasks, i.e., breast tumor
segmentation and skin lesion segmentation. The results validate that our
proposed FedMix outperforms the state-of-the-art method by a large margin.
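The adaptive weight assignment can be pictured with a minimal sketch of the
global aggregation step. This is an illustration only, not the authors'
implementation: the function and variable names (aggregate, client_states,
logits) and the use of a softmax over learnable per-client scores are
assumptions made for clarity.

```python
# Minimal sketch: federated aggregation with adaptive per-client weights,
# assuming the weights come from a softmax over learnable per-client scores.
# Illustration only; this does not reproduce the FedMix training procedure.
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array of logits."""
    z = np.exp(x - np.max(x))
    return z / z.sum()

def aggregate(client_states, logits):
    """Combine per-client model parameters into a global model.

    client_states: list of dicts mapping parameter names to np.ndarray,
                   one dict per client (clients may hold pixel-level,
                   bounding-box, or image-level labels).
    logits:        per-client scores; the softmax turns them into
                   aggregation weights, so more reliable clients contribute
                   more to the global update.
    """
    weights = softmax(np.asarray(logits, dtype=np.float64))
    return {
        name: sum(w * state[name] for w, state in zip(weights, client_states))
        for name in client_states[0]
    }

if __name__ == "__main__":
    # Three toy clients, each holding a single 2x2 parameter tensor.
    states = [{"conv.weight": np.full((2, 2), float(i))} for i in range(3)]
    print(aggregate(states, logits=[0.1, 0.5, 2.0])["conv.weight"])
```

In the paper the weights are learned during the global model update; here they
are passed in as fixed values only to show the weighted averaging itself.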
Related papers
- Cross-model Mutual Learning for Exemplar-based Medical Image Segmentation [25.874281336821685]
We introduce a novel Cross-model Mutual learning framework for Exemplar-based Medical image Segmentation (CMEMS).
arXiv Detail & Related papers (2024-04-18T00:18:07Z)
- Federated Semi-supervised Learning for Medical Image Segmentation with intra-client and inter-client Consistency [10.16245019262119]
Federated learning aims to train a shared model across isolated clients without local data exchange.
In this work, we propose a novel federated semi-supervised learning framework for medical image segmentation.
arXiv Detail & Related papers (2024-03-19T12:52:38Z)
- Rethinking Semi-Supervised Federated Learning: How to co-train fully-labeled and fully-unlabeled client imaging data [6.322831694506287]
We propose Isolated Federated Learning (IsoFed), a learning scheme specifically designed for semi-supervised federated learning (SSFL) that circumvents the problem by avoiding simple averaging of supervised and semi-supervised models.
In particular, our training approach consists of two parts: (a) isolated aggregation of labeled and unlabeled client models, and (b) local self-supervised pretraining of isolated global models in all clients.
arXiv Detail & Related papers (2023-10-28T20:41:41Z)
- Scale Federated Learning for Label Set Mismatch in Medical Image Classification [4.344828846048128]
Federated learning (FL) has been introduced to the healthcare domain as a decentralized learning paradigm.
Most previous studies have assumed that every client holds an identical label set.
We propose the framework FedLSM to solve the problem of Label Set Mismatch.
arXiv Detail & Related papers (2023-04-14T05:32:01Z)
- CellMix: A General Instance Relationship based Method for Data Augmentation Towards Pathology Image Classification [6.9596321268519326]
In pathology image analysis, obtaining and maintaining high-quality annotated samples is an extremely labor-intensive task.
We propose the CellMix framework, which employs a novel distribution-oriented in-place shuffle approach.
Our experiments in pathology image classification tasks demonstrate state-of-the-art (SOTA) performance on 7 distinct datasets.
arXiv Detail & Related papers (2023-01-27T03:17:35Z)
- Dynamic Bank Learning for Semi-supervised Federated Image Diagnosis with Class Imbalance [65.61909544178603]
We study the practical yet challenging problem of class-imbalanced semi-supervised FL (imFed-Semi).
This imFed-Semi problem is addressed by a novel dynamic bank learning scheme, which improves client training by exploiting class proportion information.
We evaluate our approach on two public real-world medical datasets, including the intracranial hemorrhage diagnosis with 25,000 CT slices and skin lesion diagnosis with 10,015 dermoscopy images.
arXiv Detail & Related papers (2022-06-27T06:51:48Z)
- FedNoiL: A Simple Two-Level Sampling Method for Federated Learning with Noisy Labels [49.47228898303909]
Federated learning (FL) aims at training a global model on the server side while the training data are collected and remain at the local devices.
Local training on noisy labels can easily result in overfitting, which is devastating to the global model through aggregation.
We develop a simple two-level sampling method "FedNoiL" that selects clients for more robust global aggregation on the server.
arXiv Detail & Related papers (2022-05-20T12:06:39Z)
- Robust Medical Image Classification from Noisy Labeled Data with Global and Local Representation Guided Co-training [73.60883490436956]
We propose a novel collaborative training paradigm with global and local representation learning for robust medical image classification.
We employ the self-ensemble model with a noisy label filter to efficiently select the clean and noisy samples.
We also design a novel global and local representation learning scheme to implicitly regularize the networks to utilize noisy samples.
arXiv Detail & Related papers (2022-05-10T07:50:08Z)
- Federated Semi-supervised Medical Image Classification via Inter-client Relation Matching [58.26619456972598]
Federated learning (FL) has emerged with increasing popularity to collaborate distributed medical institutions for training deep networks.
This paper studies a practical yet challenging FL problem, named Federated Semi-supervised Learning (FSSL).
We present a novel approach for this problem, which improves over the traditional consistency regularization mechanism with a new inter-client relation matching scheme.
arXiv Detail & Related papers (2021-06-16T07:58:00Z)
- Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
- Un-Mix: Rethinking Image Mixtures for Unsupervised Visual Representation Learning [108.999497144296]
Recently advanced unsupervised learning approaches use the siamese-like framework to compare two "views" from the same image for learning representations.
This work aims to introduce the concept of distance in label space into unsupervised learning and make the model aware of the soft degree of similarity between positive or negative pairs.
Despite its conceptual simplicity, we show empirically that with the proposed solution, Unsupervised image mixtures (Un-Mix), we can learn subtler, more robust, and generalized representations from the transformed input and the corresponding new label space (see the mixup-style sketch after this list).
arXiv Detail & Related papers (2020-03-11T17:59:04Z)
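To make the image-mixture idea in Un-Mix more concrete, the following is a
minimal mixup-style sketch of mixing two inputs and keeping the mixing
coefficient as a soft degree of similarity. It illustrates only the generic
mixing technique, not the Un-Mix algorithm itself; all names (mix_images,
alpha, lam) are hypothetical.

```python
# Minimal mixup-style sketch: blend two images and keep the mixing
# coefficient as a soft similarity label. Illustration only; not Un-Mix.
import numpy as np

def mix_images(img_a, img_b, alpha=1.0, rng=None):
    """Blend two images; lam is the soft similarity of the mixture to img_a."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)       # mixing coefficient from Beta(alpha, alpha)
    mixed = lam * img_a + (1.0 - lam) * img_b
    return mixed, lam

if __name__ == "__main__":
    a = np.zeros((32, 32, 3))          # toy "view" A
    b = np.ones((32, 32, 3))           # toy "view" B
    mixed, lam = mix_images(a, b)
    print(lam, mixed.mean())           # for these toy inputs the mean is 1 - lam
```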
This list is automatically generated from the titles and abstracts of the papers in this site.