BabyNet: Residual Transformer Module for Birth Weight Prediction on
Fetal Ultrasound Video
- URL: http://arxiv.org/abs/2205.09382v1
- Date: Thu, 19 May 2022 08:27:23 GMT
- Title: BabyNet: Residual Transformer Module for Birth Weight Prediction on
Fetal Ultrasound Video
- Authors: Szymon Płotka, Michał K. Grzeszczyk, Robert
Brawura-Biskupski-Samaha, Paweł Gutaj, Michał Lipa, Tomasz Trzciński,
Arkadiusz Sitek
- Abstract summary: We propose the Residual Transformer Module which extends a 3D ResNet-based network for analysis of 2D+t spatio-temporal ultrasound video scans.
Our end-to-end method, called BabyNet, automatically predicts fetal birth weight based on fetal ultrasound video scans.
- Score: 8.468600443532413
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predicting fetal weight at birth is an important aspect of perinatal care,
particularly in the context of antenatal management, which includes the planned
timing and the mode of delivery. Accurate prediction of weight using prenatal
ultrasound is challenging, as it requires images of specific fetal body parts
during advanced pregnancy, which are difficult to capture due to poor image
quality caused by the lack of amniotic fluid. As a consequence, predictions
that rely on standard methods often suffer from significant errors. In this
paper we propose the Residual Transformer Module which extends a 3D
ResNet-based network for analysis of 2D+t spatio-temporal ultrasound video
scans. Our end-to-end method, called BabyNet, automatically predicts fetal
birth weight based on fetal ultrasound video scans. We evaluate BabyNet using a
dedicated clinical set comprising 225 2D fetal ultrasound videos acquired from
75 pregnant patients one day prior to delivery. Experimental results show
that BabyNet outperforms several state-of-the-art methods and estimates the
weight at birth with accuracy comparable to human experts. Furthermore,
combining estimates provided by human experts with those computed by BabyNet
yields the best results, outperforming either approach alone by a significant
margin. The source code of BabyNet is available at
https://github.com/SanoScience/BabyNet.
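The abstract describes the architecture only at a high level, so below is a minimal, hypothetical PyTorch sketch of the stated idea: spatio-temporal features from a 3D CNN backbone are flattened into tokens, passed through a residual self-attention block, and pooled into a single birth-weight regression output. All layer names, shapes, and hyperparameters are assumptions for illustration, not the authors' reference design; the actual implementation is in the linked repository.

```python
# Hypothetical sketch only: a residual transformer block on top of 3D CNN
# features for scalar (birth-weight) regression. Shapes and names are
# assumptions; the official code is at https://github.com/SanoScience/BabyNet.
import torch
import torch.nn as nn


class ResidualTransformerBlock(nn.Module):
    """Self-attention over spatio-temporal tokens with residual shortcuts."""

    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, N, dim)
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]  # residual attention
        return x + self.mlp(self.norm2(x))                  # residual MLP


class BabyNetSketch(nn.Module):
    """3D CNN backbone -> token sequence -> residual transformer -> regression."""

    def __init__(self, dim: int = 512):
        super().__init__()
        # Stand-in backbone; the paper extends a 3D ResNet.
        self.backbone = nn.Sequential(
            nn.Conv3d(1, dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((4, 4, 4)),
        )
        self.block = ResidualTransformerBlock(dim)
        self.head = nn.Linear(dim, 1)  # predicted birth weight (e.g., grams)

    def forward(self, video: torch.Tensor) -> torch.Tensor:  # (B, 1, T, H, W)
        feats = self.backbone(video)               # (B, dim, 4, 4, 4)
        tokens = feats.flatten(2).transpose(1, 2)  # (B, 64, dim)
        tokens = self.block(tokens)
        return self.head(tokens.mean(dim=1)).squeeze(-1)


# Example: two 16-frame, 64x64 grayscale ultrasound clips.
weights = BabyNetSketch()(torch.randn(2, 1, 16, 64, 64))
print(weights.shape)  # torch.Size([2])
```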
Related papers
- Predicting Adverse Neonatal Outcomes for Preterm Neonates with
Multi-Task Learning [51.487856868285995]
We first analyze the correlations between three adverse neonatal outcomes and then formulate the diagnosis of multiple neonatal outcomes as a multi-task learning (MTL) problem.
In particular, the MTL framework contains shared hidden layers and multiple task-specific branches.
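As a generic illustration of this hard-parameter-sharing pattern (not the paper's exact architecture), the sketch below shares a trunk of hidden layers across several outcome-specific branches; the layer sizes and number of tasks are assumptions.

```python
# Generic hard-parameter-sharing sketch: a shared trunk feeds several
# task-specific branches, one per adverse neonatal outcome. Sizes are assumed.
import torch
import torch.nn as nn


class SharedTrunkMTL(nn.Module):
    def __init__(self, in_features: int, n_tasks: int = 3, hidden: int = 128):
        super().__init__()
        # Shared hidden layers learned jointly across all outcomes.
        self.shared = nn.Sequential(
            nn.Linear(in_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One small branch per task; each outputs a logit for its outcome.
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden, 64), nn.ReLU(), nn.Linear(64, 1))
            for _ in range(n_tasks)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.shared(x)
        return torch.cat([branch(h) for branch in self.branches], dim=1)


logits = SharedTrunkMTL(in_features=40)(torch.randn(8, 40))  # (8, 3) task logits
```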
arXiv Detail & Related papers (2023-03-28T00:44:06Z)
- Localizing Scan Targets from Human Pose for Autonomous Lung Ultrasound
Imaging [61.60067283680348]
With the advent of the COVID-19 global pandemic, there is a need to fully automate ultrasound imaging.
We propose a vision-based, data-driven method that incorporates learning-based computer vision techniques.
Our method attains an accuracy level of 15.52 (9.47) mm for probe positioning and 4.32 (3.69) degrees for probe orientation, with a success rate above 80% under an error threshold of 25 mm for all scan targets.
arXiv Detail & Related papers (2022-12-15T14:34:12Z)
- Deep Learning Fetal Ultrasound Video Model Match Human Observers in
Biometric Measurements [8.468600443532413]
This work investigates the use of deep convolutional neural networks (CNNs) to automatically perform measurements of fetal body parts.
The observed differences in measurement values were within the range of inter- and intra-observer variability.
We argue that FUVAI has the potential to assist sonographers who perform fetal biometric measurements in clinical settings.
arXiv Detail & Related papers (2022-05-27T09:00:19Z)
- Enabling faster and more reliable sonographic assessment of gestational
age through machine learning [1.3238745915345225]
Fetal ultrasounds are an essential part of prenatal care and can be used to estimate gestational age (GA).
We developed three AI models: an image model using standard plane images, a video model using fly-to videos, and an ensemble model (combining both image and video).
All three were statistically superior to standard fetal biometry-based GA estimates derived by expert sonographers.
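For illustration only, here is a trivial sketch of combining per-exam GA estimates from an image model and a video model; the actual ensemble's combination strategy and weights are not given in the summary, so equal weighting is an assumption.

```python
# Minimal illustration of ensembling two gestational-age estimates (in days).
# The equal weighting is an assumption, not the paper's method.
def ensemble_ga(image_ga_days: float, video_ga_days: float,
                image_weight: float = 0.5) -> float:
    """Combine image-model and video-model GA estimates into one value."""
    return image_weight * image_ga_days + (1.0 - image_weight) * video_ga_days


print(ensemble_ga(196.0, 202.0))  # 199.0 days
```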
arXiv Detail & Related papers (2022-03-22T17:15:56Z)
- FetalNet: Multi-task deep learning framework for fetal ultrasound
biometric measurements [11.364211664829567]
We propose an end-to-end multi-task neural network called FetalNet with an attention mechanism and a stacked module for fetal ultrasound scan video analysis.
The main goal in fetal ultrasound video analysis is to find proper standard planes to measure the fetal head, abdomen and femur.
Our method called FetalNet outperforms existing state-of-the-art methods in both classification and segmentation in fetal ultrasound video recordings.
arXiv Detail & Related papers (2021-07-14T19:13:33Z)
- AutoFB: Automating Fetal Biometry Estimation from Standard Ultrasound
Planes [10.745788530692305]
The proposed framework semantically segments the key fetal anatomies using state-of-the-art segmentation models.
We show that the network with the best segmentation performance tends to be more accurate for biometry estimation.
arXiv Detail & Related papers (2021-07-12T08:42:31Z)
- Wide & Deep neural network model for patch aggregation in CNN-based
prostate cancer detection systems [51.19354417900591]
Prostate cancer (PCa) is one of the leading causes of death among men, with almost 1.41 million new cases and around 375,000 deaths in 2020.
To perform an automatic diagnosis, prostate tissue samples are first digitized into gigapixel-resolution whole-slide images.
Small subimages called patches are extracted and predicted, obtaining a patch-level classification.
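The title refers to a wide-and-deep model that aggregates these patch-level predictions into a slide-level result. Below is a generic sketch of that aggregation idea; the handcrafted "wide" statistics and the histogram-based "deep" pathway are illustrative assumptions, not the paper's exact feature set.

```python
# Generic wide-and-deep aggregation of patch-level cancer scores into a
# slide-level prediction. Feature choices are assumptions for illustration.
import torch
import torch.nn as nn


class WideDeepAggregator(nn.Module):
    def __init__(self, n_bins: int = 10, n_classes: int = 2):
        super().__init__()
        self.n_bins = n_bins
        # Deep part: small MLP over the normalized histogram of patch scores.
        self.deep = nn.Sequential(nn.Linear(n_bins, 32), nn.ReLU(), nn.Linear(32, 16))
        # Wide part: 3 simple statistics of the patch scores, concatenated in.
        self.head = nn.Linear(16 + 3, n_classes)

    def forward(self, patch_scores: torch.Tensor) -> torch.Tensor:
        # patch_scores: (num_patches,) probabilities from a patch-level CNN.
        hist = torch.histc(patch_scores, bins=self.n_bins, min=0.0, max=1.0)
        hist = hist / patch_scores.numel()
        wide = torch.stack([patch_scores.mean(),
                            patch_scores.max(),
                            (patch_scores > 0.5).float().mean()])
        return self.head(torch.cat([self.deep(hist), wide]))  # slide-level logits


slide_logits = WideDeepAggregator()(torch.rand(500))  # 500 patch probabilities
```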
arXiv Detail & Related papers (2021-05-20T18:13:58Z)
- Spontaneous preterm birth prediction using convolutional neural networks [8.47519763941156]
An estimated 15 million babies are born too early every year.
Approximately 1 million children die each year due to complications of preterm birth (PTB).
arXiv Detail & Related papers (2020-08-16T21:21:33Z)
- M2Net: Multi-modal Multi-channel Network for Overall Survival Time
Prediction of Brain Tumor Patients [151.4352001822956]
Early and accurate prediction of overall survival (OS) time can help to obtain better treatment planning for brain tumor patients.
Existing prediction methods rely on radiomic features at the local lesion area of a magnetic resonance (MR) volume.
We propose an end-to-end OS time prediction model, namely the Multi-modal Multi-channel Network (M2Net).
arXiv Detail & Related papers (2020-06-01T05:21:37Z)
- Hybrid Attention for Automatic Segmentation of Whole Fetal Head in
Prenatal Ultrasound Volumes [52.53375964591765]
We propose the first fully-automated solution to segment the whole fetal head in US volumes.
The segmentation task is first formulated as an end-to-end volumetric mapping under an encoder-decoder deep architecture.
We then combine the segmentor with a proposed hybrid attention scheme (HAS) to select discriminative features and suppress the non-informative volumetric features.
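As a rough illustration of attention gating over volumetric features (not the paper's actual hybrid attention scheme), the sketch below combines a channel gate and a spatial gate on a 3D feature map to emphasize discriminative features and suppress non-informative ones.

```python
# Generic channel + spatial attention gate for 3D feature maps; layer sizes
# and gating design are assumptions, not the paper's exact HAS module.
import torch
import torch.nn as nn


class ChannelSpatialGate3D(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // 4), nn.ReLU(),
            nn.Linear(channels // 4, channels), nn.Sigmoid(),
        )
        self.spatial_conv = nn.Sequential(nn.Conv3d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, D, H, W)
        c_weights = self.channel_fc(x.mean(dim=(2, 3, 4)))  # (B, C) channel gate
        x = x * c_weights[:, :, None, None, None]           # reweight channels
        return x * self.spatial_conv(x)                     # mask uninformative voxels


gated = ChannelSpatialGate3D(32)(torch.randn(1, 32, 8, 16, 16))
```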
arXiv Detail & Related papers (2020-04-28T14:43:05Z)
- FetusMap: Fetal Pose Estimation in 3D Ultrasound [42.59502360552173]
We propose to estimate the 3D pose of the fetus in US volumes to facilitate its quantitative analysis.
This is the first work in the literature on 3D fetal pose estimation.
We propose a self-supervised learning (SSL) framework to finetune the deep network to form visually plausible pose predictions.
arXiv Detail & Related papers (2019-10-11T01:45:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.