Uncertainty-Guided Mutual Consistency Learning for Semi-Supervised
Medical Image Segmentation
- URL: http://arxiv.org/abs/2112.02508v1
- Date: Sun, 5 Dec 2021 08:19:41 GMT
- Title: Uncertainty-Guided Mutual Consistency Learning for Semi-Supervised
Medical Image Segmentation
- Authors: Yichi Zhang, Qingcheng Liao, Rushi Jiao, Jicong Zhang
- Abstract summary: We propose a novel uncertainty-guided mutual consistency learning framework for medical image segmentation.
It integrates intra-task consistency learning from up-to-date predictions for self-ensembling and cross-task consistency learning from task-level regularization to exploit geometric shape information.
Our method achieves performance gains by leveraging unlabeled data and outperforms existing semi-supervised segmentation methods.
- Score: 9.745971699005857
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical image segmentation is a fundamental and critical step in many
clinical approaches. Semi-supervised learning has been widely applied to
medical image segmentation tasks since it alleviates the heavy burden of
acquiring expert-examined annotations and takes advantage of unlabeled data,
which is much easier to acquire. Although consistency learning has been proven
to be an effective approach by enforcing an invariance of predictions under
different distributions, existing approaches cannot make full use of
region-level shape constraint and boundary-level distance information from
unlabeled data. In this paper, we propose a novel uncertainty-guided mutual
consistency learning framework to effectively exploit unlabeled data by
integrating intra-task consistency learning from up-to-date predictions for
self-ensembling and cross-task consistency learning from task-level
regularization to exploit geometric shape information. The framework is guided
by the models' estimated segmentation uncertainty to select relatively
certain predictions for consistency learning, so as to effectively exploit more
reliable information from unlabeled data. We extensively validate our proposed
method on two publicly available benchmark datasets: Left Atrium Segmentation
(LA) dataset and Brain Tumor Segmentation (BraTS) dataset. Experimental results
demonstrate that our method achieves performance gains by leveraging unlabeled
data and outperforms existing semi-supervised segmentation methods.
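The abstract's core idea, selecting relatively certain predictions for consistency learning, can be illustrated with a minimal sketch (not the authors' released code): predictive entropy is computed from several stochastic forward passes, and the consistency loss is masked to low-uncertainty pixels. All function names and the threshold choice here are illustrative assumptions.

```python
import numpy as np

def predictive_entropy(probs):
    """Pixel-wise entropy of the mean softmax probabilities.

    probs: (T, C, H, W) array from T stochastic forward passes.
    Returns an (H, W) uncertainty map; higher entropy = less certain.
    """
    mean_p = probs.mean(axis=0)                              # (C, H, W)
    return -np.sum(mean_p * np.log(mean_p + 1e-8), axis=0)   # (H, W)

def masked_consistency_loss(pred_a, pred_b, probs, threshold):
    """MSE between two (C, H, W) predictions, restricted to certain pixels."""
    uncertainty = predictive_entropy(probs)                  # (H, W)
    mask = (uncertainty < threshold).astype(np.float32)      # keep certain pixels
    sq_err = ((pred_a - pred_b) ** 2).mean(axis=0)           # average over channels
    return float((mask * sq_err).sum() / (mask.sum() + 1e-8))
```

In practice the threshold is often relaxed over training so that more pixels participate as the model grows confident; the exact schedule used by the paper is not specified in this summary.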
Related papers
- CrossMatch: Enhance Semi-Supervised Medical Image Segmentation with Perturbation Strategies and Knowledge Distillation [7.6057981800052845]
CrossMatch is a novel framework that integrates knowledge distillation with dual strategies (image-level and feature-level) to improve the model's learning from both labeled and unlabeled data.
Our method significantly surpasses other state-of-the-art techniques in standard benchmarks by effectively minimizing the gap between training on labeled and unlabeled data.
arXiv Detail & Related papers (2024-05-01T07:16:03Z)
- Self-aware and Cross-sample Prototypical Learning for Semi-supervised Medical Image Segmentation [10.18427897663732]
Consistency learning plays a crucial role in semi-supervised medical image segmentation.
It enables the effective utilization of limited annotated data while leveraging the abundance of unannotated data.
We propose a self-aware and cross-sample prototypical learning method (SCP-Net) to enhance prediction diversity in consistency learning.
arXiv Detail & Related papers (2023-05-25T16:22:04Z)
- Adaptive Negative Evidential Deep Learning for Open-set Semi-supervised Learning [69.81438976273866]
Open-set semi-supervised learning (Open-set SSL) considers a more practical scenario, where unlabeled data and test data contain new categories (outliers) not observed in labeled data (inliers).
We introduce evidential deep learning (EDL) as an outlier detector to quantify different types of uncertainty, and design different uncertainty metrics for self-training and inference.
We propose a novel adaptive negative optimization strategy, making EDL more tailored to the unlabeled dataset containing both inliers and outliers.
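Evidential deep learning, as used above for outlier detection, follows subjective logic: non-negative class evidence e gives Dirichlet parameters α = e + 1, per-class belief masses b_k = e_k / S, and a vacuity uncertainty u = K / S, where S = Σ α_k. A minimal sketch of that standard formulation (illustrative, not this paper's code):

```python
import numpy as np

def edl_uncertainty(evidence):
    """Belief masses and vacuity uncertainty from non-negative class evidence.

    evidence: (K,) array, e.g. a ReLU'd network output.
    Satisfies belief.sum() + uncertainty == 1 (subjective logic).
    """
    alpha = evidence + 1.0           # Dirichlet parameters
    S = alpha.sum()                  # Dirichlet strength
    belief = evidence / S            # per-class belief mass
    uncertainty = len(evidence) / S  # vacuity: high when evidence is scarce
    return belief, uncertainty
```

Outliers tend to accumulate little evidence for any inlier class, so their vacuity stays high; thresholding it yields a simple outlier detector.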
arXiv Detail & Related papers (2023-03-21T09:07:15Z)
- PCA: Semi-supervised Segmentation with Patch Confidence Adversarial Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to obtain sufficient gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z)
- Boundary-aware Information Maximization for Self-supervised Medical Image Segmentation [13.828282295918628]
We propose a novel unsupervised pre-training framework that avoids the drawback of contrastive learning.
Experimental results on two benchmark medical segmentation datasets reveal our method's effectiveness when few annotated images are available.
arXiv Detail & Related papers (2022-02-04T20:18:00Z)
- Uncertainty-Aware Deep Co-training for Semi-supervised Medical Image Segmentation [4.935055133266873]
We propose a novel uncertainty-aware scheme to make models learn regions purposefully.
Specifically, we employ Monte Carlo Sampling as an estimation method to attain an uncertainty map.
In the backward process, we jointly optimize unsupervised and supervised losses to accelerate the convergence of the network.
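Joint supervised/unsupervised objectives of this kind are commonly combined as L = L_sup + λ(t)·L_unsup, with λ ramped up over training so the noisy unsupervised signal does not dominate early on. A sketch of the widely used sigmoid-shaped ramp-up (a common convention, not necessarily this paper's exact schedule):

```python
import math

def rampup_weight(step, max_weight=1.0, rampup_steps=1000):
    """Sigmoid-shaped ramp-up for the unsupervised loss weight.

    Near-zero at step 0, reaching max_weight after rampup_steps.
    """
    if step >= rampup_steps:
        return max_weight
    phase = 1.0 - step / rampup_steps
    return max_weight * math.exp(-5.0 * phase * phase)

def joint_loss(sup_loss, unsup_loss, step):
    """Supervised loss plus ramped unsupervised consistency loss."""
    return sup_loss + rampup_weight(step) * unsup_loss
```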
arXiv Detail & Related papers (2021-11-23T03:26:24Z)
- Towards Robust Partially Supervised Multi-Structure Medical Image Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z)
- Towards Cross-modality Medical Image Segmentation with Online Mutual Knowledge Distillation [71.89867233426597]
In this paper, we aim to exploit the prior knowledge learned from one modality to improve the segmentation performance on another modality.
We propose a novel Mutual Knowledge Distillation scheme to thoroughly exploit the modality-shared knowledge.
Experimental results on the public multi-class cardiac segmentation data, i.e., MMWHS 2017, show that our method achieves large improvements on CT segmentation.
arXiv Detail & Related papers (2020-10-04T10:25:13Z)
- Dual-Teacher: Integrating Intra-domain and Inter-domain Teachers for Annotation-efficient Cardiac Segmentation [65.81546955181781]
We propose a novel semi-supervised domain adaptation approach, namely Dual-Teacher.
The student model learns the knowledge of unlabeled target data and labeled source data by two teacher models.
We demonstrate that our approach is able to concurrently utilize unlabeled data and cross-modality data with superior performance.
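Teacher-student schemes of this kind typically update teacher weights as an exponential moving average (EMA) of the student, as in mean-teacher frameworks; whether Dual-Teacher uses exactly this rule for both teachers is not stated in the summary, so the following is a generic sketch:

```python
def ema_update(teacher_params, student_params, decay=0.99):
    """Exponential moving average update of teacher weights from the student.

    Both arguments are flat lists of floats, standing in for parameter tensors.
    Higher decay means the teacher changes more slowly.
    """
    return [decay * t + (1.0 - decay) * s
            for t, s in zip(teacher_params, student_params)]
```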
arXiv Detail & Related papers (2020-07-13T10:00:44Z)
- Semi-supervised Medical Image Classification with Relation-driven Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits the unlabeled data by encouraging the prediction consistency of given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
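Prediction consistency under perturbations, the mechanism named in this entry, reduces to penalizing the disagreement between a model's outputs on an input and a perturbed copy. A minimal sketch with additive Gaussian noise as the perturbation (the perturbation family and callable `model` are illustrative assumptions):

```python
import numpy as np

def consistency_loss(model, x, noise_std=0.1, seed=0):
    """MSE between model outputs on an input and its noise-perturbed copy.

    `model` is any callable mapping an array to predictions of the same shape.
    """
    rng = np.random.default_rng(seed)
    x_perturbed = x + rng.normal(0.0, noise_std, size=x.shape)
    return float(((model(x) - model(x_perturbed)) ** 2).mean())
```

A perturbation-invariant model incurs zero loss; minimizing this term on unlabeled data pushes the decision function to be smooth around each sample.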
arXiv Detail & Related papers (2020-05-15T06:57:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.