Hybrid Attention for Automatic Segmentation of Whole Fetal Head in
Prenatal Ultrasound Volumes
- URL: http://arxiv.org/abs/2004.13567v1
- Date: Tue, 28 Apr 2020 14:43:05 GMT
- Title: Hybrid Attention for Automatic Segmentation of Whole Fetal Head in
Prenatal Ultrasound Volumes
- Authors: Xin Yang, Xu Wang, Yi Wang, Haoran Dou, Shengli Li, Huaxuan Wen, Yi
Lin, Pheng-Ann Heng, Dong Ni
- Abstract summary: We propose the first fully-automated solution to segment the whole fetal head in US volumes.
The segmentation task is first formulated as an end-to-end volumetric mapping under an encoder-decoder deep architecture.
We then combine the segmentor with a proposed hybrid attention scheme (HAS) to select discriminative features and suppress non-informative volumetric features.
- Score: 52.53375964591765
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Background and Objective: Biometric measurements of the fetal head
are important indicators for maternal and fetal health monitoring during
pregnancy. 3D ultrasound (US) has unique advantages over 2D scanning in covering
the whole fetal head and may improve diagnosis. However, automatic segmentation
of the whole fetal head in US volumes remains an open and unsolved problem. The
challenges that automated solutions need to tackle include poor image quality,
boundary ambiguity, long-span occlusion, and appearance variability across
different fetal poses and gestational ages. In this paper,
we propose the first fully-automated solution to segment the whole fetal head
in US volumes.
Methods: The segmentation task is first formulated as an end-to-end volumetric
mapping under an encoder-decoder deep architecture. We then combine the
segmentor with a proposed hybrid attention scheme (HAS) to select discriminative
features and suppress non-informative volumetric features in a composite and
hierarchical way. With little computational overhead, HAS proves effective in
addressing boundary ambiguity and deficiency. To enhance spatial consistency in
the segmentation, we further organize multiple segmentors in a cascaded fashion,
refining the results by revisiting the context in the predictions of their
predecessors.
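As a rough illustration of the ideas in the Methods paragraph above, the sketch below implements a generic channel-plus-spatial attention gate for 3D feature maps and a toy cascaded refinement loop. This is an assumption-laden reading of HAS and the cascade, not the authors' released code; the names HybridAttention3D and cascade are hypothetical.

```python
# Minimal sketch (not the authors' released code): a hybrid channel + spatial
# attention gate for 3D feature maps, plus a toy cascade of segmentors.
# All class/function names here (HybridAttention3D, cascade) are hypothetical.
import torch
import torch.nn as nn


class HybridAttention3D(nn.Module):
    """Channel attention followed by spatial attention for (N, C, D, H, W) tensors."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel gate: squeeze spatial dims, predict one weight per channel.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial gate: a single-channel saliency map over the volume.
        self.spatial_gate = nn.Sequential(
            nn.Conv3d(channels, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)  # suppress non-informative channels
        x = x * self.spatial_gate(x)  # suppress non-informative voxels
        return x


def cascade(segmentors, volume: torch.Tensor) -> torch.Tensor:
    """Toy cascaded refinement: each stage sees the raw volume concatenated with
    the previous stage's prediction (one plausible way to 'revisit context')."""
    pred = torch.zeros_like(volume)
    for seg in segmentors:
        pred = seg(torch.cat([volume, pred], dim=1))
    return pred


if __name__ == "__main__":
    feats = torch.randn(1, 16, 32, 32, 32)           # toy volumetric feature map
    print(HybridAttention3D(16)(feats).shape)        # torch.Size([1, 16, 32, 32, 32])

    segs = [nn.Conv3d(2, 1, kernel_size=1) for _ in range(2)]  # stand-in segmentors
    vol = torch.randn(1, 1, 32, 32, 32)
    print(cascade(segs, vol).shape)                  # torch.Size([1, 1, 32, 32, 32])
```

The channel gate re-weights whole feature maps while the spatial gate produces a per-voxel saliency map; per the abstract, the actual HAS combines such attention "in a composite and hierarchical way" across the encoder-decoder.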
Results: Validated on a large dataset collected from 100 healthy volunteers,
our method presents superior segmentation performance (Dice Similarity
Coefficient (DSC) of 96.05%) and remarkable agreement with experts. On another
156 volumes collected from 52 volunteers, we achieve high reproducibility (mean
standard deviation of 11.524 mL) against scan variations.
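For reference, the DSC reported above is the standard volumetric overlap metric between a predicted and a ground-truth mask; a minimal, illustrative computation on binary masks (not the authors' evaluation pipeline) is sketched below.

```python
# Illustrative Dice Similarity Coefficient on binary masks (not the authors'
# evaluation code): DSC = 2|P ∩ G| / (|P| + |G|).
import numpy as np


def dice_coefficient(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return float(2.0 * intersection / (pred.sum() + gt.sum() + eps))


if __name__ == "__main__":
    a = np.zeros((8, 8, 8), dtype=bool)
    b = np.zeros((8, 8, 8), dtype=bool)
    a[2:6, 2:6, 2:6] = True   # 4x4x4 cube
    b[3:6, 2:6, 2:6] = True   # 3x4x4 cube, fully inside a
    print(round(dice_coefficient(a, b), 3))  # 2*48 / (64 + 48) ≈ 0.857
```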
Conclusion: This is the first investigation of whole fetal head segmentation in
3D US. Our method is a promising and feasible solution for assisting volumetric
US-based prenatal studies.
Related papers
- PSFHS Challenge Report: Pubic Symphysis and Fetal Head Segmentation from Intrapartum Ultrasound Images [20.956972919840293]
The Grand Challenge on Pubic Symphysis-Fetal Head (PSFHS) was held alongside the 26th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2023).
This challenge aimed to enhance the development of automatic segmentation algorithms at an international scale, providing the largest dataset to date with 5,101 intrapartum ultrasound images.
The algorithms have elevated the state-of-the-art in automatic PSFHS from intrapartum ultrasound images.
arXiv Detail & Related papers (2024-09-17T08:24:34Z)
- Towards a Benchmark for Colorectal Cancer Segmentation in Endorectal Ultrasound Videos: Dataset and Model Development [59.74920439478643]
In this paper, we collect and annotate the first benchmark dataset that covers diverse ERUS scenarios.
Our ERUS-10K dataset comprises 77 videos and 10,000 high-resolution annotated frames.
We introduce a benchmark model for colorectal cancer segmentation, named the Adaptive Sparse-context TRansformer (ASTR).
arXiv Detail & Related papers (2024-08-19T15:04:42Z)
- Enhancing Generalized Fetal Brain MRI Segmentation using A Cascade Network with Depth-wise Separable Convolution and Attention Mechanism [2.2252684361733293]
We propose a novel cascade network called CasUNext to enhance the accuracy and generalization of fetal brain MRI segmentation.
We evaluate CasUNext on 150 fetal MRI scans between 20 and 36 weeks from two scanners made by Philips and Siemens.
Results demonstrate that CasUNext achieves improved segmentation performance compared to U-Nets and other state-of-the-art approaches.
arXiv Detail & Related papers (2024-05-24T04:23:22Z)
- Multi-Task Learning Approach for Unified Biometric Estimation from Fetal Ultrasound Anomaly Scans [0.8213829427624407]
We propose a multi-task learning approach to classify the region into head, abdomen and femur.
We were able to achieve a mean absolute error (MAE) of 1.08 mm on head circumference, 1.44 mm on abdomen circumference and 1.10 mm on femur length with a classification accuracy of 99.91%.
arXiv Detail & Related papers (2023-11-16T06:35:02Z)
- FetusMapV2: Enhanced Fetal Pose Estimation in 3D Ultrasound [28.408626329596668]
We propose a novel 3D fetal pose estimation framework (called FetusMapV2) to overcome the above challenges.
First, we propose a scheme that explores the complementary network structure-unconstrained and activation-unreserved GPU memory management approaches.
Second, we design a novel Pair Loss to mitigate confusion caused by symmetrical and similar anatomical structures.
Third, we propose a shape priors-based self-supervised learning by selecting the relatively stable landmarks to refine the pose online.
arXiv Detail & Related papers (2023-10-30T06:18:47Z)
- Tissue Segmentation of Thick-Slice Fetal Brain MR Scans with Guidance from High-Quality Isotropic Volumes [52.242103848335354]
We propose a novel Cycle-Consistent Domain Adaptation Network (C2DA-Net) to efficiently transfer the knowledge learned from high-quality isotropic volumes for accurate tissue segmentation of thick-slice scans.
Our C2DA-Net can fully utilize a small set of annotated isotropic volumes to guide tissue segmentation on unannotated thick-slice scans.
arXiv Detail & Related papers (2023-08-13T12:51:15Z)
- Localizing Scan Targets from Human Pose for Autonomous Lung Ultrasound Imaging [61.60067283680348]
With the advent of the COVID-19 global pandemic, there is a need to fully automate ultrasound imaging.
We propose a vision-based, data driven method that incorporates learning-based computer vision techniques.
Our method attains an accuracy level of 15.52 (9.47) mm for probe positioning and 4.32 (3.69) deg for probe orientation, with a success rate above 80% under an error threshold of 25 mm for all scan targets.
arXiv Detail & Related papers (2022-12-15T14:34:12Z)
- Towards A Device-Independent Deep Learning Approach for the Automated Segmentation of Sonographic Fetal Brain Structures: A Multi-Center and Multi-Device Validation [0.0]
We propose a DL-based segmentation framework for the automated segmentation of 10 key fetal brain structures from 2 axial planes of fetal brain USG images (2D).
The proposed DL system offered a promising and generalizable performance (multi-centers, multi-device) and also presents evidence in support of device-induced variation in image quality.
arXiv Detail & Related papers (2022-02-28T05:42:03Z)
- Robust Medical Instrument Segmentation Challenge 2019 [56.148440125599905]
Intraoperative tracking of laparoscopic instruments is often a prerequisite for computer and robotic-assisted interventions.
Our challenge was based on a surgical data set comprising 10,040 annotated images acquired from a total of 30 surgical procedures.
The results confirm the initial hypothesis, namely that algorithm performance degrades with an increasing domain gap.
arXiv Detail & Related papers (2020-03-23T14:35:08Z)
- VerSe: A Vertebrae Labelling and Segmentation Benchmark for Multi-detector CT Images [121.31355003451152]
The Large Scale Vertebrae Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020.
We present the results of this evaluation and further investigate the performance variation at the vertebra level, the scan level, and across different fields of view.
arXiv Detail & Related papers (2020-01-24T21:09:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.