Maternal and Fetal Health Status Assessment by Using Machine Learning on Optical 3D Body Scans
- URL: http://arxiv.org/abs/2504.05627v1
- Date: Tue, 08 Apr 2025 03:02:26 GMT
- Title: Maternal and Fetal Health Status Assessment by Using Machine Learning on Optical 3D Body Scans
- Authors: Ruting Cheng, Yijiang Zheng, Boyuan Feng, Chuhui Qiu, Zhuoxin Long, Joaquin A. Calderon, Xiaoke Zhang, Jaclyn M. Phillips, James K. Hahn
- Abstract summary: This study explores the potential of 3D body scan data, captured during the 18-24 gestational weeks, to predict adverse pregnancy outcomes. We developed a novel algorithm with two parallel streams for extracting body shape features. Our results indicate that 3D body shape can assist in predicting preterm labor, gestational diabetes mellitus (GDM), gestational hypertension (GH) and in estimating fetal weight.
- Score: 3.153771294026575
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Monitoring maternal and fetal health during pregnancy is crucial for preventing adverse outcomes. While tests such as ultrasound scans offer high accuracy, they can be costly and inconvenient. Telehealth and more accessible body shape information provide pregnant women with a convenient way to monitor their health. This study explores the potential of 3D body scan data, captured during the 18-24 gestational weeks, to predict adverse pregnancy outcomes and estimate clinical parameters. We developed a novel algorithm with two parallel streams that extract body shape features: one for supervised learning to extract sequential abdominal circumference information, and another for unsupervised learning to extract global shape descriptors, alongside a branch for demographic data. Our results indicate that 3D body shape can assist in predicting preterm labor, gestational diabetes mellitus (GDM), gestational hypertension (GH), and in estimating fetal weight. Compared to other machine learning models, our algorithm achieved the best performance, with prediction accuracies exceeding 88% and fetal weight estimation accuracy of 76.74% within a 10% error margin, outperforming conventional anthropometric methods by 22.22%.
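The abstract describes a three-branch fusion: a supervised stream over sequential abdominal circumferences, an unsupervised stream producing global shape descriptors, and a branch for demographic data. Below is a minimal PyTorch-style sketch of how such a fusion could be wired; the GRU sequence encoder, all layer sizes and feature dimensions, and the within-10%-error helper are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of the two-stream + demographic fusion described in the
# abstract. Layer choices and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn


class ThreeBranchFusion(nn.Module):
    def __init__(self, num_slices=64, global_dim=128, demo_dim=8,
                 hidden=64, num_classes=2):
        super().__init__()
        # Stream 1 (supervised): sequential abdominal circumference values,
        # treated here as a 1D sequence over torso slices and encoded by a GRU.
        self.seq_encoder = nn.GRU(input_size=1, hidden_size=hidden,
                                  batch_first=True)
        # Stream 2 (unsupervised): global shape descriptors, e.g. the latent
        # code of a pretrained autoencoder over the 3D scan (assumed precomputed).
        self.shape_proj = nn.Sequential(nn.Linear(global_dim, hidden), nn.ReLU())
        # Branch 3: demographic data (age, parity, pre-pregnancy BMI, ...).
        self.demo_proj = nn.Sequential(nn.Linear(demo_dim, hidden), nn.ReLU())
        # Fused head for outcome prediction (e.g. GDM yes/no).
        self.head = nn.Sequential(
            nn.Linear(3 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, circumferences, shape_code, demographics):
        # circumferences: (B, num_slices) -> (B, num_slices, 1)
        _, h = self.seq_encoder(circumferences.unsqueeze(-1))
        seq_feat = h.squeeze(0)  # (B, hidden)
        fused = torch.cat([seq_feat,
                           self.shape_proj(shape_code),
                           self.demo_proj(demographics)], dim=-1)
        return self.head(fused)


def within_10pct_accuracy(pred_weight, true_weight):
    """Fraction of fetal-weight estimates within a 10% relative error,
    matching the error margin quoted in the abstract."""
    rel_err = (pred_weight - true_weight).abs() / true_weight
    return (rel_err <= 0.10).float().mean().item()
```

For fetal-weight estimation, the same fused trunk could end in a single regression output, with the within-10%-error fraction serving as the reported accuracy metric.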
Related papers
- Time-to-Event Pretraining for 3D Medical Imaging [44.46415168541444]
We introduce time-to-event pretraining, a pretraining framework for 3D medical imaging models.
We use a dataset of 18,945 CT scans (4.2 million 2D images) and time-to-event distributions across thousands of EHR-derived tasks.
Our method improves outcome prediction, achieving an average AUROC increase of 23.7% and a 29.4% gain in Harrell's C-index across 8 benchmark tasks.
arXiv Detail & Related papers (2024-11-14T11:08:54Z) - Unveiling the Unborn: Advancing Fetal Health Classification through Machine Learning [0.0]
This research paper presents a novel machine-learning approach for fetal health classification.
The proposed model achieves an impressive accuracy of 98.31% on a test set.
By incorporating multiple data points, our model offers a more holistic and reliable evaluation.
arXiv Detail & Related papers (2023-09-30T22:02:51Z) - Body Fat Estimation from Surface Meshes using Graph Neural Networks [48.85291874087541]
We show that triangulated body surface meshes can be used to accurately predict VAT and ASAT volumes using graph neural networks.
Our methods achieve high performance while reducing training time and required resources compared to state-of-the-art convolutional neural networks in this area.
arXiv Detail & Related papers (2023-07-13T10:21:34Z) - Predicting Adverse Neonatal Outcomes for Preterm Neonates with Multi-Task Learning [51.487856868285995]
We first analyze the correlations between three adverse neonatal outcomes and then formulate the diagnosis of multiple neonatal outcomes as a multi-task learning (MTL) problem.
In particular, the MTL framework contains shared hidden layers and multiple task-specific branches.
arXiv Detail & Related papers (2023-03-28T00:44:06Z) - FPUS23: An Ultrasound Fetus Phantom Dataset with Deep Neural Network Evaluations for Fetus Orientations, Fetal Planes, and Anatomical Features [10.404128105946583]
We present a novel fetus phantom ultrasound dataset, FPUS23, which can be used to identify the correct diagnostic planes for estimating fetal biometric values.
The entire dataset is composed of 15,728 images, which are used to train four different Deep Neural Network models.
We have also evaluated the models trained using our FPUS23 dataset, to show that the information learned by these models can be used to substantially increase the accuracy on real-world ultrasound fetus datasets.
arXiv Detail & Related papers (2023-03-14T12:46:48Z) - Localizing Scan Targets from Human Pose for Autonomous Lung Ultrasound Imaging [61.60067283680348]
With the advent of the COVID-19 global pandemic, there is a need to fully automate ultrasound imaging.
We propose a vision-based, data driven method that incorporates learning-based computer vision techniques.
Our method attains an accuracy level of 15.52 (9.47) mm for probe positioning and 4.32 (3.69) deg for probe orientation, with a success rate above 80% under an error threshold of 25 mm for all scan targets.
arXiv Detail & Related papers (2022-12-15T14:34:12Z) - BabyNet: Residual Transformer Module for Birth Weight Prediction on Fetal Ultrasound Video [8.468600443532413]
We propose the Residual Transformer Module, which extends a 3D ResNet-based network for analysis of 2D+t spatio-temporal ultrasound video scans.
Our end-to-end method, called BabyNet, automatically predicts fetal birth weight based on fetal ultrasound video scans.
arXiv Detail & Related papers (2022-05-19T08:27:23Z) - AutoFB: Automating Fetal Biometry Estimation from Standard Ultrasound Planes [10.745788530692305]
The proposed framework semantically segments the key fetal anatomies using state-of-the-art segmentation models.
We show that the network with the best segmentation performance tends to be more accurate for biometry estimation.
arXiv Detail & Related papers (2021-07-12T08:42:31Z) - Hybrid Attention for Automatic Segmentation of Whole Fetal Head in Prenatal Ultrasound Volumes [52.53375964591765]
We propose the first fully-automated solution to segment the whole fetal head in US volumes.
The segmentation task is firstly formulated as an end-to-end volumetric mapping under an encoder-decoder deep architecture.
We then combine the segmentor with a proposed hybrid attention scheme (HAS) to select discriminative features and suppress the non-informative volumetric features.
arXiv Detail & Related papers (2020-04-28T14:43:05Z) - FetusMap: Fetal Pose Estimation in 3D Ultrasound [42.59502360552173]
We propose to estimate the 3D pose of the fetus in US volumes to facilitate its quantitative analyses.
This is the first work in the literature on 3D fetal pose estimation.
We propose a self-supervised learning (SSL) framework to finetune the deep network to form visually plausible pose predictions.
arXiv Detail & Related papers (2019-10-11T01:45:09Z)