Collaborative Learning for Annotation-Efficient Volumetric MR Image
Segmentation
- URL: http://arxiv.org/abs/2312.10978v1
- Date: Mon, 18 Dec 2023 07:02:37 GMT
- Title: Collaborative Learning for Annotation-Efficient Volumetric MR Image
Segmentation
- Authors: Yousuf Babiker M. Osman, Cheng Li, Weijian Huang, and Shanshan Wang
- Abstract summary: The aim of this study is to build a deep learning method exploring sparse annotations, namely only a single 2D slice label for each 3D training MR image.
A collaborative learning method was developed by integrating the strengths of semi-supervised and self-supervised learning schemes.
The proposed method achieved a substantial improvement in segmentation accuracy, increasing the mean B-IoU significantly by more than 10.0% for prostate segmentation.
- Score: 5.462792626065119
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Background: Deep learning has presented great potential in accurate MR image
segmentation when enough labeled data are provided for network optimization.
However, manually annotating 3D MR images is tedious and time-consuming,
requiring experts with rich domain knowledge and experience. Purpose: To build
a deep learning method exploring sparse annotations, namely only a single 2D
slice label for each 3D training MR image. Population: 3D MR images of 150
subjects from two publicly available datasets were included. Among them, 50
(1,377 image slices) are for prostate segmentation. The other 100 (8,800 image
slices) are for left atrium segmentation. Five-fold cross-validation
experiments were carried out utilizing the first dataset. For the second
dataset, 80 subjects were used for training and 20 were used for testing.
Assessment: A collaborative learning method was developed by integrating the
strengths of semi-supervised and self-supervised learning schemes. The method
was trained using labeled central slices and unlabeled non-central slices.
Segmentation performance on the testing set was reported quantitatively and
qualitatively. Results: Compared to FS-LCS, MT, UA-MT, DCT-Seg, ICT, and AC-MT,
the proposed method achieved a substantial improvement in segmentation
accuracy, increasing the mean B-IoU significantly by more than 10.0% for
prostate segmentation (proposed method B-IoU: 70.3% vs. ICT B-IoU: 60.3%) and
by more than 6.0% for left atrium segmentation (proposed method B-IoU: 66.1%
vs. ICT B-IoU: 60.1%).
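
The abstract reports results only; as a rough illustration of the training signal it describes (supervision from a single labeled central slice per volume plus a consistency term on the unlabeled non-central slices), a minimal sketch is given below. This is not the authors' implementation: the mean-teacher-style consistency loss, the function and variable names, and the loss weight are illustrative assumptions, and the paper's self-supervised component is omitted.

# Minimal sketch (assumption, not the paper's method): per-volume loss that
# supervises only the single labeled central slice and applies a
# mean-teacher-style consistency term to the remaining unlabeled slices.
import torch
import torch.nn.functional as F

def sparse_slice_loss(student_logits, teacher_logits, central_label,
                      central_idx, consistency_weight=0.1):
    """student_logits, teacher_logits: (S, C, H, W) predictions for the S slices
    of one 3D volume; central_label: (H, W) integer class labels of the single
    annotated (central) slice; teacher_logits are assumed to be detached."""
    # Supervised term: cross-entropy on the one labeled central slice.
    sup = F.cross_entropy(student_logits[central_idx:central_idx + 1],
                          central_label.unsqueeze(0))

    # Unsupervised term: encourage student/teacher agreement on unlabeled slices.
    mask = torch.ones(student_logits.shape[0], dtype=torch.bool)
    mask[central_idx] = False
    cons = F.mse_loss(torch.softmax(student_logits[mask], dim=1),
                      torch.softmax(teacher_logits[mask], dim=1))

    return sup + consistency_weight * cons

In a typical setup of this kind, the teacher network would be an exponential-moving-average copy of the student and its predictions would be computed without gradients; the exact design used in the paper may differ.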
Related papers
- TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR images [62.53931644063323]
In this study we extended the capabilities of TotalSegmentator to MR images.
We trained an nnU-Net segmentation algorithm on this dataset and calculated similarity coefficients (Dice) to evaluate the model's performance.
The model significantly outperformed two other publicly available segmentation models (Dice score 0.824 versus 0.762; p<0.001 and 0.762 versus 0.542; p<0.001).
arXiv Detail & Related papers (2024-05-29T20:15:54Z)
- Label-efficient Multi-organ Segmentation Method with Diffusion Model [6.413416851085592]
We present a label-efficient learning approach using a pre-trained diffusion model for multi-organ segmentation tasks in CT images.
Our method achieves competitive multi-organ segmentation performance compared to state-of-the-art methods on the FLARE 2022 dataset.
arXiv Detail & Related papers (2024-02-23T09:25:57Z)
- OneSeg: Self-learning and One-shot Learning based Single-slice Annotation for 3D Medical Image Segmentation [36.50258132379276]
We propose a self-learning and one-shot learning based framework for 3D medical image segmentation by annotating only one slice of each 3D image.
Our approach takes two steps: (1) self-learning of a reconstruction network to learn semantic correspondence among 2D slices within 3D images, and (2) representative selection of single slices for one-shot manual annotation.
Our new framework achieves comparable performance with less than 1% annotated data compared with fully supervised methods.
arXiv Detail & Related papers (2023-09-24T15:35:58Z)
- FBA-Net: Foreground and Background Aware Contrastive Learning for Semi-Supervised Atrium Segmentation [10.11072886547561]
We propose a contrastive learning strategy of foreground and background representations for semi-supervised 3D medical image segmentation.
Our framework has the potential to advance the field of semi-supervised 3D medical image segmentation.
arXiv Detail & Related papers (2023-06-27T04:14:50Z)
- Semi-Supervised and Self-Supervised Collaborative Learning for Prostate 3D MR Image Segmentation [8.527048567343234]
Volumetric magnetic resonance (MR) image segmentation plays an important role in many clinical applications.
Deep learning (DL) has recently achieved state-of-the-art or even human-level performance on various image segmentation tasks.
In this work, we aim to train a semi-supervised and self-supervised collaborative learning framework for prostate 3D MR image segmentation.
arXiv Detail & Related papers (2022-11-16T11:40:13Z)
- PCA: Semi-supervised Segmentation with Patch Confidence Adversarial Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to get enough gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z)
- To be Critical: Self-Calibrated Weakly Supervised Learning for Salient Object Detection [95.21700830273221]
Weakly-supervised salient object detection (WSOD) aims to develop saliency models using image-level annotations.
We propose a self-calibrated training strategy by explicitly establishing a mutual calibration loop between pseudo labels and network predictions.
We show that even a much smaller dataset with well-matched annotations can help models achieve better performance and generalizability.
arXiv Detail & Related papers (2021-09-04T02:45:22Z)
- Medical Instrument Segmentation in 3D US by Hybrid Constrained Semi-Supervised Learning [62.13520959168732]
We propose a semi-supervised learning framework for instrument segmentation in 3D US.
To achieve semi-supervised learning, a Dual-UNet is proposed to segment the instrument.
Our proposed method achieves a Dice score of about 68.6%-69.1% and an inference time of about 1 second per volume.
arXiv Detail & Related papers (2021-07-30T07:59:45Z)
- AIDE: Annotation-efficient deep learning for automatic medical image segmentation [22.410878684721286]
We introduce Annotation-effIcient Deep lEarning (AIDE) to handle imperfect datasets with an elaborately designed cross-model self-correcting mechanism.
AIDE consistently produces segmentation maps comparable to those generated by the fully supervised counterparts.
Such a 10-fold improvement of efficiency in utilizing experts' labels has the potential to promote a wide range of biomedical applications.
arXiv Detail & Related papers (2020-12-09T06:27:09Z)
- ATSO: Asynchronous Teacher-Student Optimization for Semi-Supervised Medical Image Segmentation [99.90263375737362]
We propose ATSO, an asynchronous version of teacher-student optimization.
ATSO partitions the unlabeled data into two subsets and alternately uses one subset to fine-tune the model and updates the label on the other subset.
We evaluate ATSO on two popular medical image segmentation datasets and show its superior performance in various semi-supervised settings.
arXiv Detail & Related papers (2020-06-24T04:05:12Z)
- 3D medical image segmentation with labeled and unlabeled data using autoencoders at the example of liver segmentation in CT images [58.720142291102135]
This work investigates the potential of autoencoder-extracted features to improve segmentation with a convolutional neural network.
A convolutional autoencoder was used to extract features from unlabeled data and a multi-scale, fully convolutional CNN was used to perform the target task of 3D liver segmentation in CT images.
arXiv Detail & Related papers (2020-03-17T20:20:43Z)
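
For the autoencoder-based entry above, a minimal sketch of the general idea is shown below: a small 2D convolutional autoencoder is pretrained by reconstruction on unlabeled slices, and its encoder features can later be reused by a segmentation CNN. The architecture, layer sizes, and names are assumptions for illustration and are not taken from that paper.

# Minimal sketch (assumption, not the paper's architecture): reconstruction
# pretraining of a convolutional autoencoder on unlabeled image slices.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self, in_channels=1, base=16):
        super().__init__()
        # Two strided conv layers downsample by 4x in total.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, base, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Transposed convolutions upsample back to the input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(base, in_channels, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def pretrain_step(model, unlabeled_batch, optimizer):
    """One reconstruction step on unlabeled slices of shape (B, 1, H, W)."""
    optimizer.zero_grad()
    recon = model(unlabeled_batch)
    loss = nn.functional.mse_loss(recon, unlabeled_batch)
    loss.backward()
    optimizer.step()
    return loss.item()

After pretraining, the encoder weights could initialize the downsampling path of a segmentation network, or the encoder outputs could be concatenated to its features, as one simple way to exploit the unlabeled data.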