Automatic 3D Multi-modal Ultrasound Segmentation of Human Placenta using
Fusion Strategies and Deep Learning
- URL: http://arxiv.org/abs/2401.09638v1
- Date: Wed, 17 Jan 2024 23:17:08 GMT
- Title: Automatic 3D Multi-modal Ultrasound Segmentation of Human Placenta using
Fusion Strategies and Deep Learning
- Authors: Sonit Singh, Gordon Stevenson, Brendan Mein, Alec Welsh and Arcot
Sowmya
- Abstract summary: We propose an automatic three-dimensional multi-modal (B-mode and power Doppler) ultrasound segmentation of the human placenta.
We collected data containing B-mode and power Doppler ultrasound scans for 400 studies.
We found that multimodal information in the form of B-mode and power Doppler scans outperforms any single modality.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Purpose: Ultrasound is the most commonly used medical imaging modality for
diagnosis and screening in clinical practice. Due to its safety profile,
noninvasive nature and portability, ultrasound is the primary imaging modality
for fetal assessment in pregnancy. Current ultrasound processing methods are
either manual or semi-automatic and are therefore laborious, time-consuming and
prone to errors, and automation would go a long way in addressing these
challenges. Automated identification of placental changes at earlier gestation
could facilitate potential therapies for conditions such as fetal growth
restriction and pre-eclampsia that are currently detected only at late
gestational age, potentially preventing perinatal morbidity and mortality.
Methods: We propose an automatic three-dimensional multi-modal (B-mode and
power Doppler) ultrasound segmentation of the human placenta using deep
learning combined with different fusion strategies. We collected data containing
B-mode and power Doppler ultrasound scans for 400 studies.
Results: We evaluated different fusion strategies and state-of-the-art image
segmentation networks for placenta segmentation based on standard overlap- and
boundary-based metrics. We found that multimodal information in the form of
B-mode and power Doppler scans outperforms any single modality. Furthermore, we
found that B-mode and power Doppler input scans fused at the data level provide
the best results with a mean Dice Similarity Coefficient (DSC) of 0.849.
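The data-level fusion described above can be illustrated with a minimal sketch: co-registered B-mode and power Doppler volumes are stacked along a channel axis so a single segmentation network receives both modalities in one input tensor. This is an assumed illustration of the general technique, not the authors' implementation; the function name and volume shapes are hypothetical.

```python
import numpy as np

def fuse_data_level(bmode: np.ndarray, doppler: np.ndarray) -> np.ndarray:
    """Data-level (early) fusion: stack co-registered B-mode and power
    Doppler volumes along a new leading channel axis, producing one
    two-channel input for a segmentation network."""
    if bmode.shape != doppler.shape:
        raise ValueError("Volumes must be co-registered to the same grid")
    return np.stack([bmode, doppler], axis=0)  # shape (2, D, H, W)

# Example with two hypothetical 64^3 volumes fused into one 2-channel input.
fused = fuse_data_level(np.zeros((64, 64, 64)), np.ones((64, 64, 64)))
```

Fusing at the data level keeps a single network and lets the first convolutional layer learn cross-modal interactions directly, in contrast to feature- or decision-level fusion, which maintain separate encoders or models per modality.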
Conclusion: We conclude that the multi-modal approach of combining B-mode and
power Doppler scans is effective in segmenting the placenta from 3D ultrasound
scans in a fully automated manner and is robust to quality variation of the
datasets.
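The overlap-based metric used in the Results, the Dice Similarity Coefficient, can be computed for binary segmentation masks as 2|A ∩ B| / (|A| + |B|). A minimal sketch (the epsilon term guarding against empty masks is an assumption, not part of the paper):

```python
import numpy as np

def dice_similarity(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Dice Similarity Coefficient between two binary masks:
    DSC = 2 * |pred AND gt| / (|pred| + |gt|)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return float(2.0 * intersection / (pred.sum() + gt.sum() + eps))

# Identical masks give a DSC of ~1.0; disjoint masks give 0.0.
mask = np.zeros((8, 8, 8), dtype=bool)
mask[2:6, 2:6, 2:6] = True
score = dice_similarity(mask, mask)
```

A mean DSC of 0.849, as reported for data-level fusion, therefore indicates that on average roughly 85% overlap (by this harmonic measure) is achieved between predicted and ground-truth placenta volumes.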
Related papers
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset, consisting of unseen synthetic data and images collected from silicon aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z) - FUSC: Fetal Ultrasound Semantic Clustering of Second Trimester Scans
Using Deep Self-supervised Learning [1.0819408603463427]
More than 140M fetuses are born yearly, resulting in numerous scans.
The availability of a large volume of fetal ultrasound scans presents the opportunity to train robust machine learning models.
This study presents an unsupervised approach for automatically clustering ultrasound images into a large range of fetal views.
arXiv Detail & Related papers (2023-10-19T09:11:23Z) - AiAReSeg: Catheter Detection and Segmentation in Interventional
Ultrasound using Transformers [75.20925220246689]
Endovascular surgeries are performed using the gold standard of fluoroscopy, which uses ionising radiation to visualise catheters and vasculature.
This work proposes a solution using an adaptation of a state-of-the-art machine learning transformer architecture to detect and segment catheters in axial interventional Ultrasound image sequences.
arXiv Detail & Related papers (2023-09-25T19:34:12Z) - MUVF-YOLOX: A Multi-modal Ultrasound Video Fusion Network for Renal
Tumor Diagnosis [10.452919030855796]
We propose a novel multi-modal ultrasound video fusion network that can effectively perform multi-modal feature fusion and video classification for renal tumor diagnosis.
Experimental results on a multicenter dataset show that the proposed framework outperforms the single-modal models and the competing methods.
arXiv Detail & Related papers (2023-07-15T14:15:42Z) - Towards Realistic Ultrasound Fetal Brain Imaging Synthesis [0.7315240103690552]
There are few public ultrasound fetal imaging datasets due to insufficient amounts of clinical data, patient privacy, rare occurrence of abnormalities in general practice, and limited experts for data collection and validation.
To address such data scarcity, we proposed generative adversarial networks (GAN)-based models, diffusion-super-resolution-GAN and transformer-based-GAN, to synthesise images of fetal ultrasound brain planes from one public dataset.
arXiv Detail & Related papers (2023-04-08T07:07:20Z) - Localizing Scan Targets from Human Pose for Autonomous Lung Ultrasound
Imaging [61.60067283680348]
With the advent of the COVID-19 global pandemic, there is a need to fully automate ultrasound imaging.
We propose a vision-based, data-driven method that incorporates learning-based computer vision techniques.
Our method attains an accuracy level of 15.52 (9.47) mm for probe positioning and 4.32 (3.69) deg for probe orientation, with a success rate above 80% under an error threshold of 25 mm for all scan targets.
arXiv Detail & Related papers (2022-12-15T14:34:12Z) - Multiple Time Series Fusion Based on LSTM: An Application to CAP A Phase
Classification Using EEG [56.155331323304]
Deep learning-based feature-level fusion of electroencephalogram channels is carried out in this work.
Channel selection, fusion, and classification procedures were optimized by two optimization algorithms.
arXiv Detail & Related papers (2021-12-18T14:17:49Z) - Deep Learning for Ultrasound Beamforming [120.12255978513912]
Beamforming, the process of mapping received ultrasound echoes to the spatial image domain, lies at the heart of the ultrasound image formation chain.
Modern ultrasound imaging leans heavily on innovations in powerful digital receive channel processing.
Deep learning methods can play a compelling role in the digital beamforming pipeline.
arXiv Detail & Related papers (2021-09-23T15:15:21Z) - Multi-Modal Active Learning for Automatic Liver Fibrosis Diagnosis based
on Ultrasound Shear Wave Elastography [13.13249599000645]
Noninvasive diagnosis like ultrasound (US) imaging plays a very important role in automatic liver fibrosis diagnosis (ALFD).
Due to noisy data and the expensive annotation of US images, the application of Artificial Intelligence (AI)-assisted approaches encounters a bottleneck.
In this work, we innovatively propose a multi-modal fusion network with active learning (MMFN-AL) for ALFD to exploit the information of multiple modalities.
arXiv Detail & Related papers (2020-11-02T03:05:24Z) - Hybrid Attention for Automatic Segmentation of Whole Fetal Head in
Prenatal Ultrasound Volumes [52.53375964591765]
We propose the first fully-automated solution to segment the whole fetal head in US volumes.
The segmentation task is firstly formulated as an end-to-end volumetric mapping under an encoder-decoder deep architecture.
We then combine the segmentor with a proposed hybrid attention scheme (HAS) to select discriminative features and suppress the non-informative volumetric features.
arXiv Detail & Related papers (2020-04-28T14:43:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.