Dual-Decoder Consistency via Pseudo-Labels Guided Data Augmentation for
Semi-Supervised Medical Image Segmentation
- URL: http://arxiv.org/abs/2308.16573v3
- Date: Thu, 18 Jan 2024 09:25:19 GMT
- Title: Dual-Decoder Consistency via Pseudo-Labels Guided Data Augmentation for
Semi-Supervised Medical Image Segmentation
- Authors: Yuanbin Chen, Tao Wang, Hui Tang, Longxuan Zhao, Ruige Zong, Shun
Chen, Tao Tan, Xinlin Zhang, Tong Tong
- Abstract summary: We present a novel semi-supervised learning method, Dual-Decoder Consistency via Pseudo-Labels Guided Data Augmentation.
We use distinct decoders for the student and teacher networks while maintaining the same encoder.
To learn from unlabeled data, we use pseudo-labels generated by the teacher network and augment the training data with them.
- Score: 13.707121013895929
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While supervised learning has achieved remarkable success, obtaining
large-scale labeled datasets in biomedical imaging is often impractical due to
high costs and the time-consuming annotations required from radiologists.
Semi-supervised learning emerges as an effective strategy to overcome this
limitation by leveraging useful information from unlabeled datasets. In this
paper, we present a novel semi-supervised learning method, Dual-Decoder
Consistency via Pseudo-Labels Guided Data Augmentation (DCPA), for medical
image segmentation. We devise a consistency regularization to promote
consistent representations during the training process. Specifically, we use
distinct decoders for the student and teacher networks while maintaining the
same encoder. Moreover, to learn from unlabeled data, we use pseudo-labels
generated by the teacher network and augment the training data with them.
Both techniques contribute to enhancing the performance of the
proposed method. The method is evaluated on three representative medical image
segmentation datasets. Comprehensive comparisons with state-of-the-art
semi-supervised medical image segmentation methods are conducted under typical
scenarios with 10% and 20% labeled data, as well as in the extreme
scenario of only 5% labeled data. The experimental results consistently
demonstrate the superior performance of our method compared to other methods
across the three semi-supervised settings. The source code is publicly
available at https://github.com/BinYCn/DCPA.git.
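As an illustration of the training scheme the abstract describes (a shared encoder, distinct student and teacher decoders, and pseudo-label guided augmentation of unlabeled data), the following is a minimal PyTorch sketch. The network definitions, the CutMix-style mixing, the EMA rate, and all other hyperparameters are illustrative assumptions, not the authors' implementation; consult the linked repository for the actual DCPA code.

```python
# Minimal sketch of a dual-decoder mean-teacher setup with pseudo-label guided
# mixing, loosely following the DCPA abstract. All specifics are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, in_ch=1, feat=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, feat=16, n_classes=2):
        super().__init__()
        self.head = nn.Conv2d(feat, n_classes, 1)
    def forward(self, z):
        return self.head(z)

# Shared encoder; distinct decoders for the student and teacher branches.
encoder, student_dec, teacher_dec = Encoder(), Decoder(), Decoder()
for p in teacher_dec.parameters():
    p.requires_grad_(False)  # teacher decoder is updated only by EMA

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(student_dec.parameters()), lr=1e-3)

def ema_update(teacher, student, alpha=0.99):
    # Exponential moving average of student decoder weights into the teacher decoder.
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.data.mul_(alpha).add_(s.data, alpha=1 - alpha)

# Toy batch: a few labeled and unlabeled 2D slices (shapes are illustrative).
x_l = torch.randn(2, 1, 64, 64)
y_l = torch.randint(0, 2, (2, 64, 64))
x_u = torch.randn(2, 1, 64, 64)

for step in range(10):
    # 1) Teacher branch produces pseudo-labels for the unlabeled images.
    with torch.no_grad():
        pseudo = teacher_dec(encoder(x_u)).argmax(dim=1)

    # 2) Pseudo-label guided augmentation: a simple CutMix-style paste of a
    #    labeled patch into the unlabeled image, as a stand-in for the paper's scheme.
    mixed_x, mixed_y = x_u.clone(), pseudo.clone()
    mixed_x[:, :, :32, :32] = x_l[:, :, :32, :32]
    mixed_y[:, :32, :32] = y_l[:, :32, :32]

    # 3) Student branch trains on labeled data and on the mixed data, which
    #    enforces consistency with the teacher's pseudo-labels.
    loss = (F.cross_entropy(student_dec(encoder(x_l)), y_l)
            + F.cross_entropy(student_dec(encoder(mixed_x)), mixed_y))

    opt.zero_grad()
    loss.backward()
    opt.step()
    ema_update(teacher_dec, student_dec)
```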
Related papers
- GuidedNet: Semi-Supervised Multi-Organ Segmentation via Labeled Data Guide Unlabeled Data [4.775846640214768]
Semi-supervised multi-organ medical image segmentation aids physicians in improving disease diagnosis and treatment planning.
A key concept is that voxel features from labeled and unlabeled data that lie close to each other in the feature space are more likely to belong to the same class.
We introduce a Knowledge Transfer Cross Pseudo-label Supervision (KT-CPS) strategy, which leverages the prior knowledge obtained from the labeled data to guide the training of the unlabeled data.
arXiv Detail & Related papers (2024-08-09T07:46:01Z) - Leveraging Fixed and Dynamic Pseudo-labels for Semi-supervised Medical Image Segmentation [7.9449756510822915]
Semi-supervised medical image segmentation has gained growing interest due to its ability to utilize unannotated data.
The current state-of-the-art methods mostly rely on pseudo-labeling within a co-training framework.
We propose a novel approach where multiple pseudo-labels for the same unannotated image are used to learn from the unlabeled data.
arXiv Detail & Related papers (2024-05-12T11:30:01Z) - CrossMatch: Enhance Semi-Supervised Medical Image Segmentation with Perturbation Strategies and Knowledge Distillation [7.6057981800052845]
CrossMatch is a novel framework that integrates knowledge distillation with dual strategies (image-level and feature-level) to improve the model's learning from both labeled and unlabeled data.
Our method significantly surpasses other state-of-the-art techniques in standard benchmarks by effectively minimizing the gap between training on labeled and unlabeled data.
arXiv Detail & Related papers (2024-05-01T07:16:03Z) - Pseudo Label-Guided Data Fusion and Output Consistency for
Semi-Supervised Medical Image Segmentation [9.93871075239635]
We propose the PLGDF framework, which builds upon the mean teacher network for segmenting medical images with less annotation.
We propose a novel pseudo-label utilization scheme, which combines labeled and unlabeled data to augment the dataset effectively.
Our framework yields superior performance compared to six state-of-the-art semi-supervised learning methods.
arXiv Detail & Related papers (2023-11-17T06:36:43Z) - PCA: Semi-supervised Segmentation with Patch Confidence Adversarial
Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to get enough gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z) - Pseudo-label Guided Cross-video Pixel Contrast for Robotic Surgical
Scene Segmentation with Limited Annotations [72.15956198507281]
We propose PGV-CL, a novel pseudo-label guided cross-video contrast learning method to boost scene segmentation.
We extensively evaluate our method on a public robotic surgery dataset EndoVis18 and a public cataract dataset CaDIS.
arXiv Detail & Related papers (2022-07-20T05:42:19Z) - Self-Paced Contrastive Learning for Semi-supervised Medical Image
Segmentation with Meta-labels [6.349708371894538]
We propose to adapt contrastive learning to work with meta-label annotations.
We use the meta-labels for pre-training the image encoder as well as to regularize a semi-supervised training.
Results on three different medical image segmentation datasets show that our approach highly boosts the performance of a model trained on a few scans.
arXiv Detail & Related papers (2021-07-29T04:30:46Z) - Semantic Segmentation with Generative Models: Semi-Supervised Learning
and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z) - Dual-Teacher: Integrating Intra-domain and Inter-domain Teachers for
Annotation-efficient Cardiac Segmentation [65.81546955181781]
We propose a novel semi-supervised domain adaptation approach, namely Dual-Teacher.
The student model learns the knowledge of unlabeled target data and labeled source data by two teacher models.
We demonstrate that our approach is able to concurrently utilize unlabeled data and cross-modality data with superior performance.
arXiv Detail & Related papers (2020-07-13T10:00:44Z) - ATSO: Asynchronous Teacher-Student Optimization for Semi-Supervised
Medical Image Segmentation [99.90263375737362]
We propose ATSO, an asynchronous version of teacher-student optimization.
ATSO partitions the unlabeled data into two subsets and alternately uses one subset to fine-tune the model and updates the label on the other subset.
We evaluate ATSO on two popular medical image segmentation datasets and show its superior performance in various semi-supervised settings.
arXiv Detail & Related papers (2020-06-24T04:05:12Z) - 3D medical image segmentation with labeled and unlabeled data using
autoencoders at the example of liver segmentation in CT images [58.720142291102135]
This work investigates the potential of autoencoder-extracted features to improve segmentation with a convolutional neural network.
A convolutional autoencoder was used to extract features from unlabeled data and a multi-scale, fully convolutional CNN was used to perform the target task of 3D liver segmentation in CT images.
arXiv Detail & Related papers (2020-03-17T20:20:43Z)