Improved cystic hygroma detection from prenatal imaging using ultrasound-specific self-supervised representation learning
- URL: http://arxiv.org/abs/2512.22730v1
- Date: Sun, 28 Dec 2025 00:07:26 GMT
- Title: Improved cystic hygroma detection from prenatal imaging using ultrasound-specific self-supervised representation learning
- Authors: Youssef Megahed, Robin Ducharme, Inok Lee, Inbal Willner, Olivier X. Miguel, Kevin Dick, Adrian D. C. Chan, Mark Walker, Steven Hawken,
- Abstract summary: Cystic hygroma is a high-risk prenatal ultrasound finding that portends high rates of chromosomal abnormalities, structural malformations, and adverse pregnancy outcomes. This study assesses whether ultrasound-specific self-supervised pretraining can facilitate accurate, robust deep learning detection of cystic hygroma in first-trimester ultrasound images.
- Score: 0.18058404137575482
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cystic hygroma is a high-risk prenatal ultrasound finding that portends high rates of chromosomal abnormalities, structural malformations, and adverse pregnancy outcomes. Automated detection can increase reproducibility and support scalable early screening programs, but supervised deep learning methods are limited by small labelled datasets. This study assesses whether ultrasound-specific self-supervised pretraining can facilitate accurate, robust deep learning detection of cystic hygroma in first-trimester ultrasound images. We fine-tuned the Ultrasound Self-Supervised Foundation Model with Masked Autoencoding (USF-MAE), pretrained on over 370,000 unlabelled ultrasound images, for binary classification of normal controls and cystic hygroma cases used in this study. Performance was evaluated on the same curated ultrasound dataset, preprocessing pipeline, and 4-fold cross-validation protocol as for the DenseNet-169 baseline, using accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (ROC-AUC). Model interpretability was analyzed qualitatively using Score-CAM visualizations. USF-MAE outperformed the DenseNet-169 baseline on all evaluation metrics. The proposed model yielded a mean accuracy of 0.96, sensitivity of 0.94, specificity of 0.98, and ROC-AUC of 0.98 compared to 0.93, 0.92, 0.94, and 0.94 for the DenseNet-169 baseline, respectively. Qualitative Score-CAM visualizations of model predictions demonstrated clinical relevance by highlighting expected regions in the fetal neck for both positive and negative cases. Paired statistical analysis using a Wilcoxon signed-rank test confirmed that performance improvements achieved by USF-MAE were statistically significant (p = 0.0057).
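The evaluation protocol described in the abstract (per-fold accuracy, sensitivity, and specificity over 4-fold cross-validation, followed by a paired statistical comparison between models) can be sketched as below. This is an illustrative helper, not the authors' code; the function name and toy labels are ours.

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall on positives), and specificity
    (recall on negatives) from binary labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return accuracy, sensitivity, specificity
```

Computing these per fold for both models yields paired per-fold scores; the paired Wilcoxon signed-rank test the authors report is available in standard libraries (e.g. `scipy.stats.wilcoxon`).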
Related papers
- Automated Classification of First-Trimester Fetal Heart Views Using Ultrasound-Specific Self-Supervised Learning [0.205246094017924]
We evaluate a self-supervised ultrasound foundation model, USF-MAE, for first-trimester fetal heart view classification. USF-MAE is pretrained using masked autoencoding modelling on more than 370,000 unlabelled ultrasound images. It achieved the highest performance across all evaluation metrics, with 90.57% accuracy, 91.15% precision, 90.57% recall, and 90.71% F1-score.
arXiv Detail & Related papers (2025-12-30T22:24:26Z)
- Self-Supervised Ultrasound Representation Learning for Renal Anomaly Prediction in Prenatal Imaging [0.19544534628180868]
We assessed the performance of a self-supervised ultrasound foundation model for automated fetal renal anomaly classification. Models were compared with a DenseNet-169 convolutional baseline using cross-validation and an independent test set. The largest gains were observed in the multi-class setting, with improvements of 16.28% in AUC and 46.15% in F1-score.
arXiv Detail & Related papers (2025-12-15T15:28:02Z)
- Deep Unsupervised Anomaly Detection in Brain Imaging: Large-Scale Benchmarking and Bias Analysis [42.60508892284938]
We present a large-scale, multi-center benchmark of deep unsupervised anomaly detection for brain imaging. We tested 2,221 T1w and 1,262 T2w scans spanning healthy datasets and diverse clinical cohorts. Our benchmark establishes a transparent foundation for future research and highlights priorities for clinical translation.
arXiv Detail & Related papers (2025-12-01T11:03:27Z)
- Deep Learning Analysis of Prenatal Ultrasound for Identification of Ventriculomegaly [0.17476892297485447]
Ventriculomegaly is a prenatal condition characterized by dilated cerebral ventricles of the fetal brain. The proposed model incorporates a Vision Transformer encoder pretrained on more than 370,000 ultrasound images from the OpenUS-46 corpus. The model reached an F1-score of 91.76% on the 5-fold cross-validation and 91.78% on the independent test set.
arXiv Detail & Related papers (2025-11-11T04:45:48Z)
- Validating Vision Transformers for Otoscopy: Performance and Data-Leakage Effects [42.465094107111646]
This study evaluates the efficacy of vision transformer models, specifically Swin transformers, in enhancing the diagnostic accuracy of ear diseases. The research utilised a real-world dataset from the Department of Otolaryngology at the Clinical Hospital of the Universidad de Chile.
arXiv Detail & Related papers (2025-11-06T23:20:37Z)
- An Automatic Detection Method for Hematoma Features in Placental Abruption Ultrasound Images Based on Few-Shot Learning [11.678844582870523]
Placental abruption is a severe complication during pregnancy, and its early accurate diagnosis is crucial for ensuring maternal and fetal safety. This paper proposes an improved model, EH-YOLOv11n, based on small-sample learning, aiming to achieve automatic detection of hematoma features in placental ultrasound images. Experimental results demonstrate a detection accuracy of 78%, representing a 2.5% improvement over YOLOv11n and a 13.7% increase over YOLOv8.
arXiv Detail & Related papers (2025-10-24T14:20:34Z)
- Bridging Accuracy and Interpretability: Deep Learning with XAI for Breast Cancer Detection [0.0]
We present an interpretable deep learning framework for the early detection of breast cancer using quantitative features extracted from digitized fine needle aspirate (FNA) images of breast masses. Our deep neural network, using ReLU activations, the Adam optimizer, and a binary cross-entropy loss, delivers state-of-the-art classification performance.
arXiv Detail & Related papers (2025-10-18T07:47:26Z)
- A Novel Attention-Augmented Wavelet YOLO System for Real-time Brain Vessel Segmentation on Transcranial Color-coded Doppler [49.03919553747297]
We propose an AI-powered, real-time CoW auto-segmentation system capable of efficiently capturing cerebral arteries. No prior studies have explored AI-driven cerebrovascular segmentation using Transcranial Color-coded Doppler (TCCD). The proposed AAW-YOLO demonstrated strong performance in segmenting both ipsilateral and contralateral CoW vessels.
arXiv Detail & Related papers (2025-08-19T14:41:22Z)
- Enhancing Diagnostic Reliability of Foundation Model with Uncertainty Estimation in OCT Images [41.002573031087856]
We developed a foundation model with uncertainty estimation (FMUE) to detect 11 retinal conditions on optical coherence tomography (OCT).
FMUE achieved a higher F1 score of 96.76% than two state-of-the-art algorithms, RETFound and UIOS, and improved further to 98.44% with a thresholding strategy.
Our model is superior to two ophthalmologists, with a higher F1 score (95.17% vs. 61.93% and 71.72%).
arXiv Detail & Related papers (2024-06-18T03:04:52Z)
- Uncertainty-inspired Open Set Learning for Retinal Anomaly Identification [71.06194656633447]
We establish an uncertainty-inspired open-set (UIOS) model, which was trained with fundus images of 9 retinal conditions.
Our UIOS model with thresholding strategy achieved an F1 score of 99.55%, 97.01% and 91.91% for the internal testing set.
UIOS correctly predicted high uncertainty scores, which would prompt a manual check, in datasets of non-target-category retinal diseases, low-quality fundus images, and non-fundus images.
arXiv Detail & Related papers (2023-04-08T10:47:41Z)
- Learning to diagnose cirrhosis from radiological and histological labels with joint self and weakly-supervised pretraining strategies [62.840338941861134]
We propose to leverage transfer learning from large datasets annotated by radiologists, to predict the histological score available on a small annex dataset.
We compare different pretraining methods, namely weakly-supervised and self-supervised ones, to improve cirrhosis prediction.
This method outperforms the baseline classification of the METAVIR score, reaching an AUC of 0.84 and a balanced accuracy of 0.75.
arXiv Detail & Related papers (2023-02-16T17:06:23Z)
- Self-supervised contrastive learning of echocardiogram videos enables label-efficient cardiac disease diagnosis [48.64462717254158]
We developed a self-supervised contrastive learning approach, EchoCLR, tailored to echocardiogram videos.
When fine-tuned on small portions of labeled data, EchoCLR pretraining significantly improved classification performance for left ventricular hypertrophy (LVH) and aortic stenosis (AS).
EchoCLR is unique in its ability to learn representations of medical videos and demonstrates that SSL can enable label-efficient disease classification from small, labeled datasets.
arXiv Detail & Related papers (2022-07-23T19:17:26Z)
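Contrastive pretraining approaches like EchoCLR generally build on an InfoNCE-style objective: embeddings of a positive pair (e.g. two clips of the same study) are pulled together while embeddings of other samples are pushed apart. The following is a minimal illustrative sketch of that objective on toy vectors; the function names and the temperature value are our assumptions, not details from the paper.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor: cross-entropy over similarities,
    where the positive pair should receive the highest score."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[0] / sum(exps))
```

The loss approaches zero when the positive is far more similar to the anchor than any negative, and grows as negatives become indistinguishable from the positive, which is what drives the label-efficient representations described above.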
This list is automatically generated from the titles and abstracts of the papers in this site.