Co-training with High-Confidence Pseudo Labels for Semi-supervised
Medical Image Segmentation
- URL: http://arxiv.org/abs/2301.04465v3
- Date: Fri, 26 May 2023 15:14:45 GMT
- Title: Co-training with High-Confidence Pseudo Labels for Semi-supervised
Medical Image Segmentation
- Authors: Zhiqiang Shen, Peng Cao, Hua Yang, Xiaoli Liu, Jinzhu Yang, Osmar R.
Zaiane
- Abstract summary: We propose an Uncertainty-guided Collaborative Mean-Teacher (UCMT) for semi-supervised semantic segmentation with the high-confidence pseudo labels.
UCMT consists of two main components: 1) collaborative mean-teacher (CMT) for encouraging model disagreement and performing co-training between the sub-networks, and 2) uncertainty-guided region mix (UMIX) for manipulating the input images according to the uncertainty maps of CMT and facilitating CMT to produce high-confidence pseudo labels.
- Score: 27.833321555267116
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Consistency regularization and pseudo labeling-based semi-supervised methods
perform co-training using the pseudo labels from multi-view inputs. However,
such co-training models tend to converge early to a consensus, degenerating to
the self-training ones, and produce low-confidence pseudo labels from the
perturbed inputs during training. To address these issues, we propose an
Uncertainty-guided Collaborative Mean-Teacher (UCMT) for semi-supervised
semantic segmentation with the high-confidence pseudo labels. Concretely, UCMT
consists of two main components: 1) collaborative mean-teacher (CMT) for
encouraging model disagreement and performing co-training between the
sub-networks, and 2) uncertainty-guided region mix (UMIX) for manipulating the
input images according to the uncertainty maps of CMT and facilitating CMT to
produce high-confidence pseudo labels. Combining the strengths of UMIX with
CMT, UCMT can retain model disagreement and enhance the quality of pseudo
labels for the co-training segmentation. Extensive experiments on four public
medical image datasets including 2D and 3D modalities demonstrate the
superiority of UCMT over the state-of-the-art. Code is available at:
https://github.com/Senyh/UCMT.
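The UMIX operation described in the abstract can be pictured with a short sketch. Below is a minimal, illustrative PyTorch-style example of an uncertainty-guided region mix, assuming an entropy-based uncertainty map, a fixed grid of patches, and pairing each image with its neighbour in the batch; the grid size, number of mixed patches, and pairing scheme are assumptions for illustration rather than the authors' exact implementation (see the linked repository for that).
```python
import torch
import torch.nn.functional as F


def entropy_uncertainty(logits):
    """Per-pixel predictive entropy from logits of shape (B, C, H, W)."""
    probs = F.softmax(logits, dim=1)
    return -(probs * torch.log(probs.clamp_min(1e-8))).sum(dim=1)  # (B, H, W)


def umix(images, pseudo_labels, uncertainty, grid=4, k=4):
    """Replace the k most uncertain grid cells of each image with the k most
    confident cells (and their pseudo labels) of another image in the batch."""
    b, _, h, w = images.shape
    ph, pw = h // grid, w // grid

    # Mean uncertainty per grid cell, shape (B, grid * grid).
    cell_unc = F.avg_pool2d(uncertainty.unsqueeze(1), (ph, pw)).flatten(1)
    uncertain_cells = cell_unc.topk(k, dim=1, largest=True).indices
    confident_cells = cell_unc.topk(k, dim=1, largest=False).indices

    # Pair each image with its neighbour in the batch (a simplifying assumption).
    src_img = images.roll(1, dims=0)
    src_lbl = pseudo_labels.roll(1, dims=0)
    src_conf_cells = confident_cells.roll(1, dims=0)

    mixed_img, mixed_lbl = images.clone(), pseudo_labels.clone()
    for i in range(b):
        for tgt, src in zip(uncertain_cells[i], src_conf_cells[i]):
            ty, tx = divmod(int(tgt), grid)
            sy, sx = divmod(int(src), grid)
            mixed_img[i, :, ty*ph:(ty+1)*ph, tx*pw:(tx+1)*pw] = \
                src_img[i, :, sy*ph:(sy+1)*ph, sx*pw:(sx+1)*pw]
            mixed_lbl[i, ty*ph:(ty+1)*ph, tx*pw:(tx+1)*pw] = \
                src_lbl[i, sy*ph:(sy+1)*ph, sx*pw:(sx+1)*pw]
    return mixed_img, mixed_lbl
```
In this reading, the regions a sub-network is least certain about are overwritten with higher-confidence content and labels before the next co-training step, which is how the abstract describes UMIX helping CMT produce high-confidence pseudo labels.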
Related papers
- PMT: Progressive Mean Teacher via Exploring Temporal Consistency for Semi-Supervised Medical Image Segmentation [51.509573838103854]
We propose a semi-supervised learning framework, termed Progressive Mean Teachers (PMT), for medical image segmentation.
Our PMT generates high-fidelity pseudo labels by learning robust and diverse features in the training process.
Experimental results on two datasets with different modalities, i.e., CT and MRI, demonstrate that our method outperforms the state-of-the-art medical image segmentation approaches.
arXiv Detail & Related papers (2024-09-08T15:02:25Z)
- An Evidential-enhanced Tri-Branch Consistency Learning Method for Semi-supervised Medical Image Segmentation [8.507454166954139]
We introduce an Evidential Tri-Branch Consistency learning framework (ETC-Net) for semi-supervised medical image segmentation.
ETC-Net employs three branches: an evidential conservative branch, an evidential progressive branch, and an evidential fusion branch.
We also integrate uncertainty estimation from the evidential learning into cross-supervised training, mitigating the negative impact of erroneous supervision signals.
arXiv Detail & Related papers (2024-04-10T14:25:23Z)
- Semantic Connectivity-Driven Pseudo-labeling for Cross-domain Segmentation [89.41179071022121]
Self-training is a prevailing approach in cross-domain semantic segmentation.
We propose a novel approach called Semantic Connectivity-driven pseudo-labeling.
This approach formulates pseudo-labels at the connectivity level and thus can facilitate learning structured and low-noise semantics.
arXiv Detail & Related papers (2023-12-11T12:29:51Z)
- Cross-head mutual Mean-Teaching for semi-supervised medical image segmentation [6.738522094694818]
Semi-supervised medical image segmentation (SSMIS) has witnessed substantial advancements by leveraging limited labeled data and abundant unlabeled data.
Existing state-of-the-art (SOTA) methods encounter challenges in accurately predicting labels for the unlabeled data.
We propose a novel Cross-head mutual mean-teaching Network (CMMT-Net) incorporating strong-weak data augmentation.
arXiv Detail & Related papers (2023-10-08T09:13:04Z)
- Multi-Scale Cross Contrastive Learning for Semi-Supervised Medical Image Segmentation [14.536384387956527]
We develop a novel Multi-Scale Cross Supervised Contrastive Learning framework to segment structures in medical images.
Our approach contrasts multi-scale features based on ground-truth and cross-predicted labels, in order to extract robust feature representations.
It outperforms state-of-the-art semi-supervised methods by more than 3.0% in Dice.
arXiv Detail & Related papers (2023-06-25T16:55:32Z)
- UCC: Uncertainty guided Cross-head Co-training for Semi-Supervised Semantic Segmentation [2.6324267940354655]
We present a novel learning framework called Uncertainty guided Cross-head Co-training (UCC) for semi-supervised semantic segmentation.
Our framework introduces weak and strong augmentations within a shared encoder to achieve co-training, which naturally combines the benefits of consistency and self-training.
Our approach significantly outperforms other state-of-the-art semi-supervised semantic segmentation methods.
arXiv Detail & Related papers (2022-05-20T17:43:47Z)
- Federated Semi-supervised Medical Image Classification via Inter-client Relation Matching [58.26619456972598]
Federated learning (FL) has emerged with increasing popularity to collaborate distributed medical institutions for training deep networks.
This paper studies a practical yet challenging FL problem, named Federated Semi-supervised Learning (FSSL).
We present a novel approach for this problem, which improves over traditional consistency regularization mechanism with a new inter-client relation matching scheme.
arXiv Detail & Related papers (2021-06-16T07:58:00Z)
- Semi-supervised Left Atrium Segmentation with Mutual Consistency Training [60.59108570938163]
We propose a novel Mutual Consistency Network (MC-Net) for semi-supervised left atrium segmentation from 3D MR images.
Our MC-Net consists of one encoder and two slightly different decoders, and the prediction discrepancies between the two decoders are used as an unsupervised loss.
We evaluate our MC-Net on the public Left Atrium (LA) database and it obtains impressive performance gains by exploiting the unlabeled data effectively.
arXiv Detail & Related papers (2021-03-04T09:34:32Z)
- A Teacher-Student Framework for Semi-supervised Medical Image Segmentation From Mixed Supervision [62.4773770041279]
We develop a semi-supervised learning framework in a teacher-student fashion for organ and lesion segmentation.
We show our model is robust to the quality of bounding boxes and achieves performance comparable to fully-supervised learning methods.
arXiv Detail & Related papers (2020-10-23T07:58:20Z)
- DMT: Dynamic Mutual Training for Semi-Supervised Learning [69.17919491907296]
Self-training methods usually rely on single-model prediction confidence to filter out low-confidence pseudo labels.
We propose mutual training between two different models via a dynamically re-weighted loss function, called Dynamic Mutual Training (a sketch of this re-weighting idea appears after this entry).
Our experiments show that DMT achieves state-of-the-art performance in both image classification and semantic segmentation.
arXiv Detail & Related papers (2020-04-18T03:12:55Z)
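The dynamically re-weighted loss mentioned in the DMT entry above can be sketched as follows. This is only an illustration under assumed choices (confidence raised to a power gamma, and a fixed down-weighting factor where the two models disagree), not the paper's exact weighting scheme.
```python
import torch
import torch.nn.functional as F


def reweighted_mutual_loss(logits_a, logits_b, gamma=3.0, disagree_scale=0.2):
    """Train model A on model B's pseudo labels, scaling each pixel's loss by
    B's confidence and down-weighting pixels where the two models disagree."""
    probs_b = F.softmax(logits_b, dim=1)
    conf_b, pseudo_b = probs_b.max(dim=1)      # (B, H, W) confidence and labels
    pred_a = logits_a.argmax(dim=1)            # model A's current predictions

    weight = conf_b.pow(gamma)                 # trust confident pixels more
    weight = torch.where(pred_a == pseudo_b, weight, disagree_scale * weight)

    ce = F.cross_entropy(logits_a, pseudo_b, reduction="none")  # (B, H, W)
    return (weight.detach() * ce).mean()
```
The symmetric term (training model B on model A's pseudo labels) would follow the same pattern with the roles swapped.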