Automatic 3D Ultrasound Segmentation of Uterus Using Deep Learning
- URL: http://arxiv.org/abs/2109.09283v1
- Date: Mon, 20 Sep 2021 03:13:57 GMT
- Title: Automatic 3D Ultrasound Segmentation of Uterus Using Deep Learning
- Authors: Bahareh Behboodi, Hassan Rivaz, Susan Lalondrelle, and Emma Harris
- Abstract summary: 3D ultrasound (US) can be used to image the uterus, but locating the uterine boundary in US images is a challenging task.
We developed 2D UNet-based networks trained under two scenarios.
- Score: 4.2698418800007865
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: On-line segmentation of the uterus can aid effective image-based guidance for
precise delivery of dose to the target tissue (the uterocervix) during cervix
cancer radiotherapy. 3D ultrasound (US) can be used to image the uterus;
however, locating the uterine boundary in US images is challenging due to
large daily positional and shape changes of the uterus, large variation in
bladder filling, and the limitations of 3D US images such as low resolution in
the elevational direction and imaging aberrations. Previous studies on uterus
segmentation mainly focused on semi-automatic algorithms that require manual
initialization by an expert clinician. Given the limited work on automatic 3D
uterus segmentation, the aim of the current study was to remove the need for
manual initialization in these semi-automatic algorithms by using recent deep
learning-based methods. We therefore developed 2D UNet-based networks trained
under two scenarios. In the first scenario, we trained three separate
networks, one for each plane (i.e., sagittal, coronal, axial). In the second
scenario, a single network was trained using all planes of each 3D volume. The
proposed scheme removes the initial manual selection required by previous
semi-automatic algorithms.
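To make the two training regimes concrete, the sketch below builds one model per plane (sagittal, coronal, axial) and, separately, a single model trained on slices pooled from all three planes. It is a minimal illustration under stated assumptions: the tiny encoder-decoder is a stand-in for a 2D U-Net, and the random tensors stand in for slices extracted from annotated 3D US volumes; neither reflects the authors' actual architecture, data, or hyperparameters.

```python
# Minimal sketch of the two training scenarios described in the abstract.
# Assumptions: TinyUNet2D is a placeholder for a real 2D U-Net, and slices_for()
# is a hypothetical loader standing in for 2D slices taken from 3D US volumes.
import torch
import torch.nn as nn

class TinyUNet2D(nn.Module):
    """Deliberately small encoder-decoder used here as a stand-in for a 2D U-Net."""
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                                 nn.MaxPool2d(2))
        self.dec = nn.Sequential(nn.Upsample(scale_factor=2),
                                 nn.Conv2d(16, out_ch, 3, padding=1))

    def forward(self, x):
        return self.dec(self.enc(x))

def slices_for(plane, n=8, size=96):
    """Hypothetical loader: (image, mask) 2D slices for one anatomical plane."""
    return [(torch.rand(1, 1, size, size),
             (torch.rand(1, 1, size, size) > 0.5).float()) for _ in range(n)]

def train(model, batches, epochs=1):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for img, mask in batches:
            opt.zero_grad()
            loss_fn(model(img), mask).backward()
            opt.step()
    return model

planes = ["sagittal", "coronal", "axial"]

# Scenario 1: three separate networks, each trained only on one plane's slices.
per_plane_models = {p: train(TinyUNet2D(), slices_for(p)) for p in planes}

# Scenario 2: a single network trained on slices pooled from all three planes.
single_model = train(TinyUNet2D(), [b for p in planes for b in slices_for(p)])
```

At inference time, either variant can produce per-slice masks that could be stacked back into a 3D segmentation of the volume, removing the manual initialization step of the earlier semi-automatic methods.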
Related papers
- Localizing Scan Targets from Human Pose for Autonomous Lung Ultrasound
Imaging [61.60067283680348]
With the advent of the COVID-19 global pandemic, there is a need to fully automate ultrasound imaging.
We propose a vision-based, data-driven method that incorporates learning-based computer vision techniques.
Our method attains an accuracy level of 15.52 (9.47) mm for probe positioning and 4.32 (3.69) deg for probe orientation, with a success rate above 80% under an error threshold of 25 mm for all scan targets.
arXiv Detail & Related papers (2022-12-15T14:34:12Z)
- Adaptive 3D Localization of 2D Freehand Ultrasound Brain Images [18.997300579859978]
We propose AdLocUI, a framework that Adaptively Localizes 2D Ultrasound Images in the 3D anatomical atlas.
We first train a convolutional neural network with 2D slices sampled from co-aligned 3D ultrasound volumes to predict their locations.
We fine-tune it with 2D freehand ultrasound images using a novel unsupervised cycle consistency.
arXiv Detail & Related papers (2022-09-12T17:59:41Z)
- Slice-level Detection of Intracranial Hemorrhage on CT Using Deep Descriptors of Adjacent Slices [0.31317409221921133]
We propose a new strategy to train slice-level classifiers on CT scans based on the descriptors of the adjacent slices along the axis.
We obtain a single model that ranks among the top 4% of best-performing solutions in the RSNA Intracranial Hemorrhage dataset challenge.
The proposed method is general and can be applied to other 3D medical diagnosis tasks such as MRI.
arXiv Detail & Related papers (2022-08-05T23:20:37Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- A unified 3D framework for Organs at Risk Localization and Segmentation for Radiation Therapy Planning [56.52933974838905]
Current medical workflow requires manual delineation of organs-at-risk (OAR).
In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation.
Our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging.
arXiv Detail & Related papers (2022-03-01T17:08:41Z)
- Self-Supervised Multi-Modal Alignment for Whole Body Medical Imaging [70.52819168140113]
We use a dataset of over 20,000 subjects from the UK Biobank with both whole body Dixon technique magnetic resonance (MR) scans and also dual-energy x-ray absorptiometry (DXA) scans.
We introduce a multi-modal image-matching contrastive framework, that is able to learn to match different-modality scans of the same subject with high accuracy.
Without any adaptation, we show that the correspondences learnt during this contrastive training step can be used to perform automatic cross-modal scan registration (a generic sketch of this kind of contrastive matching objective appears after this list).
arXiv Detail & Related papers (2021-07-14T12:35:05Z)
- Planar 3D Transfer Learning for End to End Unimodal MRI Unbalanced Data Segmentation [0.0]
We present a novel approach to 2D-to-3D transfer learning based on mapping pre-trained 2D convolutional neural network weights into planar 3D kernels (a generic sketch of this weight mapping also appears after this list).
The method is validated with the proposed planar 3D res-u-net network, whose encoder is transferred from the 2D VGG-16.
arXiv Detail & Related papers (2020-11-23T17:11:50Z)
- Deep Q-Network-Driven Catheter Segmentation in 3D US by Hybrid Constrained Semi-Supervised Learning and Dual-UNet [74.22397862400177]
We propose a novel catheter segmentation approach that requires fewer annotations than supervised learning methods.
Our scheme uses deep Q-learning as the pre-localization step, which avoids voxel-level annotation.
With the detected catheter, patch-based Dual-UNet is applied to segment the catheter in 3D volumetric data.
arXiv Detail & Related papers (2020-06-25T21:10:04Z)
- FetusMap: Fetal Pose Estimation in 3D Ultrasound [42.59502360552173]
We propose to estimate the 3D pose of the fetus in US volumes to facilitate its quantitative analyses.
This is the first work on 3D fetal pose estimation in the literature.
We propose a self-supervised learning (SSL) framework to fine-tune the deep network to form visually plausible pose predictions.
arXiv Detail & Related papers (2019-10-11T01:45:09Z)
- Deep Attentive Features for Prostate Segmentation in 3D Transrectal Ultrasound [59.105304755899034]
This paper develops a novel 3D deep neural network equipped with attention modules for better prostate segmentation in transrectal ultrasound (TRUS) images.
Our attention module utilizes the attention mechanism to selectively leverage the multilevel features integrated from different layers.
Experimental results on challenging 3D TRUS volumes show that our method attains satisfactory segmentation performance.
arXiv Detail & Related papers (2019-07-03T05:21:52Z)
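As referenced in the Self-Supervised Multi-Modal Alignment entry above, a cross-modal contrastive matching objective of that general kind can be sketched as follows. This is a generic, hedged illustration only: the placeholder Encoder, embedding size, and temperature are assumptions for demonstration, not that paper's MR/DXA architecture or training setup.

```python
# Generic sketch of a cross-modal contrastive matching objective (matching scans
# of the same subject across two modalities). The encoders, dimensions, and
# temperature below are illustrative placeholders, not the authors' setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Placeholder per-modality encoder producing a normalized fixed-size embedding."""
    def __init__(self, in_ch=1, dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def contrastive_matching_loss(z_a, z_b, temperature=0.1):
    """Symmetric InfoNCE: the i-th scan in modality A should match the i-th in B."""
    logits = z_a @ z_b.t() / temperature
    targets = torch.arange(z_a.size(0))
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

enc_a, enc_b = Encoder(), Encoder()   # one encoder per modality
batch_a = torch.rand(8, 1, 64, 64)    # e.g. slices from modality A
batch_b = torch.rand(8, 1, 64, 64)    # paired slices (same subjects) from modality B
loss = contrastive_matching_loss(enc_a(batch_a), enc_b(batch_b))
loss.backward()
```

The symmetric cross-entropy over the similarity matrix pushes the i-th scan of one modality to embed closest to the i-th scan of the other, which is what makes learned correspondences usable for matching and, as that entry claims, cross-modal registration.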
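Similarly, the Planar 3D Transfer Learning entry refers to mapping pre-trained 2D convolution weights into planar 3D kernels. The sketch below shows one generic way to do that mapping (a 2D kernel of shape (out, in, k, k) copied into a 3D kernel of shape (out, in, 1, k, k)); it illustrates the general idea rather than the authors' implementation, and in practice the 2D weights would come from a pre-trained encoder such as VGG-16 instead of the randomly initialized layer used here.

```python
# Generic illustration of "planar 3D" weight transfer: a 2D convolution kernel of
# shape (out, in, k, k) is copied into a 3D kernel of shape (out, in, 1, k, k),
# so a 3D network can reuse 2D weights. Sketch of the general technique only.
import torch
import torch.nn as nn

def inflate_to_planar_3d(conv2d: nn.Conv2d) -> nn.Conv3d:
    """Copy a 2D conv's weights into an equivalent planar (1 x k x k) 3D conv."""
    kh, kw = conv2d.kernel_size
    conv3d = nn.Conv3d(conv2d.in_channels, conv2d.out_channels,
                       kernel_size=(1, kh, kw),
                       stride=(1, *conv2d.stride),
                       padding=(0, *conv2d.padding),
                       bias=conv2d.bias is not None)
    with torch.no_grad():
        # (out, in, kh, kw) -> (out, in, 1, kh, kw)
        conv3d.weight.copy_(conv2d.weight.unsqueeze(2))
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d

# Toy check: the planar 3D conv applied to a volume matches the 2D conv slice-wise.
conv2d = nn.Conv2d(1, 8, kernel_size=3, padding=1)
conv3d = inflate_to_planar_3d(conv2d)
volume = torch.rand(1, 1, 5, 64, 64)                     # (batch, ch, depth, H, W)
out3d = conv3d(volume)                                    # depth preserved (kernel depth 1)
out2d = conv2d(volume[:, :, 2])                           # middle slice processed in 2D
print(torch.allclose(out3d[:, :, 2], out2d, atol=1e-5))   # True
```

Once inflated this way, the planar kernels can be fine-tuned inside a full 3D network on volumetric data, which is the transfer-learning step that entry describes.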