Pelvic floor MRI segmentation based on semi-supervised deep learning
- URL: http://arxiv.org/abs/2311.03105v2
- Date: Wed, 22 Nov 2023 15:46:00 GMT
- Title: Pelvic floor MRI segmentation based on semi-supervised deep learning
- Authors: Jianwei Zuo, Fei Feng, Zhuhui Wang, James A. Ashton-Miller, John O.L.
Delancey and Jiajia Luo
- Abstract summary: Deep learning-enabled semantic segmentation has facilitated the three-dimensional geometric reconstruction of pelvic floor organs.
The task of labeling pelvic floor MRI segmentation is labor-intensive and costly, leading to a scarcity of labels.
Insufficient segmentation labels limit the precise segmentation and reconstruction of pelvic floor organs.
- Score: 3.764963091541598
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The semantic segmentation of pelvic organs via MRI has important clinical
significance. Recently, deep learning-enabled semantic segmentation has
facilitated the three-dimensional geometric reconstruction of pelvic floor
organs, providing clinicians with accurate and intuitive diagnostic results.
However, the task of labeling pelvic floor MRI segmentation, typically
performed by clinicians, is labor-intensive and costly, leading to a scarcity
of labels. Insufficient segmentation labels limit the precise segmentation and
reconstruction of pelvic floor organs. To address these issues, we propose a
semi-supervised framework for pelvic organ segmentation. The implementation of
this framework comprises two stages. In the first stage, it performs
self-supervised pre-training using image restoration tasks. Subsequently,
fine-tuning of the self-supervised model is performed, using labeled data to
train the segmentation model. In the second stage, the self-supervised
segmentation model is used to generate pseudo labels for unlabeled data.
Ultimately, both labeled and unlabeled data are utilized in semi-supervised
training. Upon evaluation, our method significantly enhances the performance of
semantic segmentation and geometric reconstruction of pelvic organs, increasing
the Dice coefficient by 2.65% on average. For organs that are difficult to
segment, such as the uterus, semantic segmentation accuracy improves by up to
3.70%.
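The two-stage pipeline described above can be illustrated with a minimal sketch. The pseudo-label selection rule (keeping only high-confidence predictions) and the 0.9 threshold are illustrative assumptions, not details taken from the paper; the Dice coefficient is the standard overlap metric the abstract reports.

```python
def dice_coefficient(pred, target):
    """Standard Dice overlap for flat binary masks (lists of 0/1)."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    # Both masks empty: define Dice as 1.0 (perfect agreement).
    return 2.0 * intersection / total if total else 1.0


def select_pseudo_labels(class_probs, threshold=0.9):
    """Stage-two pseudo-labeling sketch: for each voxel's class
    probabilities, keep the argmax class only when the model is
    confident; otherwise mark the voxel as unlabeled (None).
    The confidence threshold here is an assumption for illustration."""
    labels = []
    for probs in class_probs:
        best = max(range(len(probs)), key=lambda i: probs[i])
        labels.append(best if probs[best] >= threshold else None)
    return labels
```

For example, `dice_coefficient([1, 1, 0, 0], [1, 0, 0, 0])` yields 2/3, and voxels whose top class probability falls below the threshold are simply excluded from the pseudo-labeled training set rather than given a noisy label.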
Related papers
- Train-Free Segmentation in MRI with Cubical Persistent Homology [0.0]
We describe a new general method for segmentation in MRI scans using Topological Data Analysis (TDA).
It works in three steps: first identifying the whole object to segment via automatic thresholding, then detecting a distinctive subset whose topology is known in advance, and finally deducing the various components of the segmentation.
We study the examples of glioblastoma segmentation in brain MRI, where a sphere is to be detected, as well as myocardium in cardiac MRI, involving a cylinder, and cortical plate detection in fetal brain MRI, whose 2D slices are circles.
arXiv Detail & Related papers (2024-01-02T11:43:49Z) - Self-Supervised Correction Learning for Semi-Supervised Biomedical Image
Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z) - Unsupervised Domain Adaptation through Shape Modeling for Medical Image
Segmentation [23.045760366698634]
We aim at modeling shape explicitly and using it to help medical image segmentation.
Previous methods proposed Variational Autoencoder (VAE) based models to learn the distribution of shape for a particular organ.
We propose a new unsupervised domain adaptation pipeline based on a pseudo loss and a VAE reconstruction loss under a teacher-student learning paradigm.
arXiv Detail & Related papers (2022-07-06T09:16:42Z) - WORD: Revisiting Organs Segmentation in the Whole Abdominal Region [14.752924082744814]
Whole abdominal organs segmentation plays an important role in abdomen lesion diagnosis, radiotherapy planning, and follow-up.
Deep learning-based medical image segmentation has shown the potential to reduce manual delineation efforts, but it still requires a large-scale fine annotated dataset for training.
In this work, we establish a large-scale Whole abdominal ORgans Dataset (WORD) for algorithms research and clinical applications development.
arXiv Detail & Related papers (2021-11-03T02:26:14Z) - Generalized Organ Segmentation by Imitating One-shot Reasoning using
Anatomical Correlation [55.1248480381153]
We propose OrganNet, which learns a generalized organ concept from a set of annotated organ classes and then transfers this concept to unseen classes.
We show that OrganNet can effectively resist the wide variations in organ morphology and produce state-of-the-art results in the one-shot segmentation task.
arXiv Detail & Related papers (2021-03-30T13:41:12Z) - Co-Generation and Segmentation for Generalized Surgical Instrument
Segmentation on Unlabelled Data [49.419268399590045]
Surgical instrument segmentation for robot-assisted surgery is needed for accurate instrument tracking and augmented reality overlays.
Deep learning-based methods have shown state-of-the-art performance for surgical instrument segmentation, but their results depend on labelled data.
In this paper, we demonstrate the limited generalizability of these methods on different datasets, including human robot-assisted surgeries.
arXiv Detail & Related papers (2021-03-16T18:41:18Z) - Three Ways to Improve Semantic Segmentation with Self-Supervised Depth
Estimation [90.87105131054419]
We present a framework for semi-supervised semantic segmentation, which is enhanced by self-supervised monocular depth estimation from unlabeled image sequences.
We validate the proposed model on the Cityscapes dataset, where all three modules demonstrate significant performance gains.
arXiv Detail & Related papers (2020-12-19T21:18:03Z) - Towards Robust Partially Supervised Multi-Structure Medical Image
Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z) - Weakly-Supervised Segmentation for Disease Localization in Chest X-Ray
Images [0.0]
We propose a novel approach to the semantic segmentation of medical chest X-ray images with only image-level class labels as supervision.
We show that this approach is applicable to chest X-rays for detecting an anomalous volume of air between the lung and the chest wall.
arXiv Detail & Related papers (2020-07-01T20:48:35Z) - Uncertainty-aware multi-view co-training for semi-supervised medical
image segmentation and domain adaptation [35.33425093398756]
Unlabeled data is much easier to acquire than well-annotated data.
We propose uncertainty-aware multi-view co-training for medical image segmentation.
Our framework is capable of efficiently utilizing unlabeled data for better performance.
arXiv Detail & Related papers (2020-06-28T22:04:54Z) - 3D medical image segmentation with labeled and unlabeled data using
autoencoders at the example of liver segmentation in CT images [58.720142291102135]
This work investigates the potential of autoencoder-extracted features to improve segmentation with a convolutional neural network.
A convolutional autoencoder was used to extract features from unlabeled data and a multi-scale, fully convolutional CNN was used to perform the target task of 3D liver segmentation in CT images.
arXiv Detail & Related papers (2020-03-17T20:20:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.