Deep Learning Analysis of Prenatal Ultrasound for Identification of Ventriculomegaly
- URL: http://arxiv.org/abs/2511.07827v1
- Date: Wed, 12 Nov 2025 01:22:17 GMT
- Title: Deep Learning Analysis of Prenatal Ultrasound for Identification of Ventriculomegaly
- Authors: Youssef Megahed, Inok Lee, Robin Ducharme, Aylin Erman, Olivier X. Miguel, Kevin Dick, Adrian D. C. Chan, Steven Hawken, Mark Walker, Felipe Moretti,
- Abstract summary: Ventriculomegaly is a prenatal condition characterized by dilated cerebral ventricles of the fetal brain. The proposed model incorporates a Vision Transformer encoder pretrained on more than 370,000 ultrasound images from the OpenUS-46 corpus. The model reached an F1-score of 91.76% on the 5-fold cross-validation and 91.78% on the independent test set.
- Score: 0.17476892297485447
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The proposed study aimed to develop a deep learning model capable of detecting ventriculomegaly on prenatal ultrasound images. Ventriculomegaly is a prenatal condition characterized by dilated cerebral ventricles of the fetal brain, and early diagnosis is important because it can be associated with an increased risk of fetal aneuploidies and/or underlying genetic syndromes. An Ultrasound Self-Supervised Foundation Model with Masked Autoencoding (USF-MAE), recently developed by our group, was fine-tuned for a binary classification task to distinguish fetal brain ultrasound images as either normal or showing ventriculomegaly. The USF-MAE incorporates a Vision Transformer encoder pretrained on more than 370,000 ultrasound images from the OpenUS-46 corpus. For this study, the pretrained encoder was adapted and fine-tuned on a curated dataset of fetal brain ultrasound images to optimize its performance for ventriculomegaly detection. Model evaluation was conducted using 5-fold cross-validation and an independent test cohort, and performance was quantified using accuracy, precision, recall, specificity, F1-score, and area under the receiver operating characteristic curve (AUC). The proposed USF-MAE model reached an F1-score of 91.76% on the 5-fold cross-validation and 91.78% on the independent test set, outperforming the baseline models by 19.37% and 16.15% (VGG-19), 2.31% and 2.56% (ResNet-50), and 5.03% and 11.93% (ViT-B/16), respectively. The model also showed a high mean test precision of 94.47% and an accuracy of 97.24%. Eigen-CAM (Eigen Class Activation Map) heatmaps showed that the model focused on the ventricle area when diagnosing ventriculomegaly, supporting its explainability and clinical plausibility.
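The evaluation metrics the abstract reports (accuracy, precision, recall, specificity, F1-score) can be sketched with a minimal, self-contained example. The labels and predictions below are toy values for illustration, not the study's data.

```python
def binary_metrics(y_true, y_pred):
    """Compute standard binary-classification metrics from label lists
    (1 = ventriculomegaly, 0 = normal in this toy example)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0       # sensitivity
    specificity = tn / (tn + fp) if tn + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": (tp + tn) / len(y_true), "precision": precision,
            "recall": recall, "specificity": specificity, "f1": f1}

# Toy example with 8 images
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
print(binary_metrics(y_true, y_pred))
```

AUC additionally requires the model's continuous scores rather than hard labels, which is why it is reported separately in the paper.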
Related papers
- EchoJEPA: A Latent Predictive Foundation Model for Echocardiography [1.2525723985884272]
We present EchoJEPA, a foundation model trained on 18 million echocardiograms across 300K patients. By leveraging a latent predictive objective, EchoJEPA learns robust anatomical representations that ignore speckle noise.
arXiv Detail & Related papers (2026-02-02T01:34:57Z) - Automated Classification of First-Trimester Fetal Heart Views Using Ultrasound-Specific Self-Supervised Learning [0.205246094017924]
We evaluate a self-supervised ultrasound foundation model, USF-MAE, for first-trimester fetal heart view classification. USF-MAE is pretrained with masked autoencoding on more than 370,000 unlabelled ultrasound images. It achieved the highest performance across all evaluation metrics, with 90.57% accuracy, 91.15% precision, 90.57% recall, and 90.71% F1-score.
arXiv Detail & Related papers (2025-12-30T22:24:26Z) - Improved cystic hygroma detection from prenatal imaging using ultrasound-specific self-supervised representation learning [0.18058404137575482]
Cystic hygroma is a high-risk prenatal ultrasound finding that portends high rates of chromosomal abnormalities, structural malformations, and adverse pregnancy outcomes. This study assesses whether ultrasound-specific self-supervised pretraining can facilitate accurate, robust deep learning detection of cystic hygroma in first-trimester ultrasound images.
arXiv Detail & Related papers (2025-12-28T00:07:26Z) - Self-Supervised Ultrasound Representation Learning for Renal Anomaly Prediction in Prenatal Imaging [0.19544534628180868]
We assessed the performance of a self-supervised ultrasound foundation model for automated fetal renal anomaly classification. Models were compared with a DenseNet-169 convolutional baseline using cross-validation and an independent test set. The largest gains were observed in the multi-class setting, with improvements of 16.28% in AUC and 46.15% in F1-score.
arXiv Detail & Related papers (2025-12-15T15:28:02Z) - Challenging DINOv3 Foundation Model under Low Inter-Class Variability: A Case Study on Fetal Brain Ultrasound [4.07447364754644]
This study provides the first comprehensive evaluation of foundation models in fetal ultrasound (US) imaging under low inter-class variability conditions. We focus on fetal brain standard planes--transthalamic (TT), transventricular (TV), and transcerebellar (TC)--which exhibit highly overlapping anatomical features. Models pretrained on fetal ultrasound data consistently outperformed those pretrained on natural images, with weighted F1-score improvements of up to 20 percent.
arXiv Detail & Related papers (2025-11-01T13:37:22Z) - USF-MAE: Ultrasound Self-Supervised Foundation Model with Masked Autoencoding [0.205246094017924]
We introduce the Ultrasound Self-Supervised Foundation Model with Masked Autoencoding (USF-MAE). USF-MAE is the first large-scale self-supervised MAE framework pretrained exclusively on ultrasound data. The model was pre-trained on 370,000 2D and 3D ultrasound images curated from 46 open-source datasets.
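The masked-autoencoding objective behind USF-MAE can be illustrated with a short sketch: a large fraction of image patches is hidden at random, the encoder sees only the visible subset, and the decoder is trained to reconstruct the rest. The 4x4 patch grid and 75% mask ratio below are illustrative assumptions, not the paper's exact configuration.

```python
import random

def random_mask(patch_ids, mask_ratio=0.75, seed=0):
    """MAE-style random masking: hide a large fraction of patches so the
    encoder processes only the visible subset and the decoder must
    reconstruct the masked ones."""
    rng = random.Random(seed)
    n_mask = int(len(patch_ids) * mask_ratio)
    masked = set(rng.sample(patch_ids, n_mask))
    visible = [p for p in patch_ids if p not in masked]
    return visible, masked

# A toy 16x16 frame split into a 4x4 grid of 4x4-pixel patches.
patch_ids = [(row, col) for row in range(4) for col in range(4)]
visible, masked = random_mask(patch_ids, mask_ratio=0.75)
print(len(visible), len(masked))  # 4 visible patches, 12 masked
```

Because most patches are hidden, the encoder only ever runs on a small fraction of tokens during pretraining, which is what makes MAE-style pretraining efficient at this scale.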
arXiv Detail & Related papers (2025-10-27T04:16:43Z) - An Automatic Detection Method for Hematoma Features in Placental Abruption Ultrasound Images Based on Few-Shot Learning [11.678844582870523]
Placental abruption is a severe complication during pregnancy, and its early accurate diagnosis is crucial for ensuring maternal and fetal safety. This paper proposes an improved model, EH-YOLOv11n, based on few-shot learning, aiming to achieve automatic detection of hematoma features in placental ultrasound images. Experimental results demonstrate a detection accuracy of 78%, representing a 2.5% improvement over YOLOv11n and a 13.7% increase over YOLOv8.
arXiv Detail & Related papers (2025-10-24T14:20:34Z) - Epistemic-aware Vision-Language Foundation Model for Fetal Ultrasound Interpretation [83.02147613524032]
We introduce FetalMind, a medical AI system tailored to fetal ultrasound for both report generation and diagnosis. We propose Salient Epistemic Disentanglement (SED), which injects an expert-curated bipartite graph into the model to decouple view-disease associations. FetalMind outperforms open- and closed-source baselines across all gestational stages, achieving +14% average gains and +61.2% higher accuracy on critical conditions.
arXiv Detail & Related papers (2025-10-14T19:57:03Z) - Brain Tumor Classification on MRI in Light of Molecular Markers [56.99710477905796]
Co-deletion of the 1p/19q gene is associated with clinical outcomes in low-grade gliomas. This study aims to utilize a specialized MRI-based convolutional neural network for brain cancer detection.
arXiv Detail & Related papers (2024-09-29T07:04:26Z) - Analysis of the BraTS 2023 Intracranial Meningioma Segmentation Challenge [44.76736949127792]
We describe the design and results from the BraTS 2023 Intracranial Meningioma Challenge. The BraTS Meningioma Challenge differed from prior BraTS Glioma challenges in that it focused on meningiomas. The top-ranked team had a lesion-wise median dice similarity coefficient (DSC) of 0.976, 0.976, and 0.964 for enhancing tumor, tumor core, and whole tumor.
arXiv Detail & Related papers (2024-05-16T03:23:57Z) - Atrial Septal Defect Detection in Children Based on Ultrasound Video Using Multiple Instances Learning [14.62565592495898]
This paper aims to study a deep learning method based on cardiac ultrasound video to assist in atrial septal defect diagnosis.
We select two standard views, the atrial septum view (subAS) and the low parasternal four-chamber view (LPS4C), to identify ASD.
For ASD detection, we achieve 89.33 AUC, 84.95 accuracy, 85.70 sensitivity, 81.51 specificity and 81.99 F1 score.
arXiv Detail & Related papers (2023-06-06T16:25:29Z) - Investigating Pulse-Echo Sound Speed Estimation in Breast Ultrasound with Deep Learning [44.70495434283752]
We propose a deep-learning approach for sound speed estimation from in-phase and quadrature ultrasound signals.
We develop a large-scale simulated ultrasound dataset that generates quasi-realistic breast tissue.
We evaluate the model on simulated, phantom, and in-vivo breast ultrasound data.
arXiv Detail & Related papers (2023-02-06T19:02:44Z) - Localizing Scan Targets from Human Pose for Autonomous Lung Ultrasound Imaging [61.60067283680348]
With the advent of the COVID-19 global pandemic, there is a need to fully automate ultrasound imaging.
We propose a vision-based, data-driven method that incorporates learning-based computer vision techniques.
Our method attains an accuracy level of 15.52 (9.47) mm for probe positioning and 4.32 (3.69) deg for probe orientation, with a success rate above 80% under an error threshold of 25 mm for all scan targets.
arXiv Detail & Related papers (2022-12-15T14:34:12Z) - Segmentation of the Myocardium on Late-Gadolinium Enhanced MRI based on 2.5D Residual Squeeze and Excitation Deep Learning Model [55.09533240649176]
The aim of this work is to develop an accurate automatic segmentation method based on deep learning models for the myocardial borders on LGE-MRI.
A total of 320 exams (with a mean of 6 slices per exam) were used for training and 28 exams for testing.
The performance of the proposed ensemble model in the basal and middle slices was similar to that of the intra-observer study, and slightly lower in the apical slices.
arXiv Detail & Related papers (2020-05-27T20:44:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information (including all content) and is not responsible for any consequences.