Towards Objective Obstetric Ultrasound Assessment: Contrastive Representation Learning for Fetal Movement Detection
- URL: http://arxiv.org/abs/2510.20214v1
- Date: Thu, 23 Oct 2025 05:03:23 GMT
- Title: Towards Objective Obstetric Ultrasound Assessment: Contrastive Representation Learning for Fetal Movement Detection
- Authors: Talha Ilyas, Duong Nhu, Allison Thomas, Arie Levin, Lim Wei Yap, Shu Gong, David Vera Anaya, Yiwen Jiang, Deval Mehta, Ritesh Warty, Vinayak Smith, Maya Reddy, Euan Wallace, Wenlong Cheng, Zongyuan Ge, Faezeh Marzbanrad
- Abstract summary: We propose Contrastive Ultrasound Video Representation Learning (CURL), a novel self-supervised learning framework for fetal movement analysis. CURL achieves a sensitivity of 78.01% and an AUROC of 81.60%, demonstrating its potential for reliable and objective FM analysis.
- Score: 11.146143798185927
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate fetal movement (FM) detection is essential for assessing prenatal health, as abnormal movement patterns can indicate underlying complications such as placental dysfunction or fetal distress. Traditional methods, including maternal perception and cardiotocography (CTG), suffer from subjectivity and limited accuracy. To address these challenges, we propose Contrastive Ultrasound Video Representation Learning (CURL), a novel self-supervised learning framework for FM detection from extended fetal ultrasound video recordings. Our approach leverages a dual-contrastive loss, incorporating both spatial and temporal contrastive learning, to learn robust motion representations. Additionally, we introduce a task-specific sampling strategy, ensuring the effective separation of movement and non-movement segments during self-supervised training, while enabling flexible inference on arbitrarily long ultrasound recordings through a probabilistic fine-tuning approach. Evaluated on an in-house dataset of 92 subjects, each with 30-minute ultrasound sessions, CURL achieves a sensitivity of 78.01% and an AUROC of 81.60%, demonstrating its potential for reliable and objective FM analysis. These results highlight the potential of self-supervised contrastive learning for fetal movement analysis, paving the way for improved prenatal monitoring and clinical decision-making.
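The dual-contrastive objective described in the abstract (a spatial and a temporal contrastive term learned jointly) can be sketched in miniature. This is a hedged numpy illustration assuming an InfoNCE/NT-Xent-style loss; the function names, the 50/50 weighting, and the temperature value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss: row i of `positives` is the positive for row i of
    `anchors`; every other row in the batch serves as a negative."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -float(np.mean(np.diag(log_prob)))     # cross-entropy on the diagonal

def dual_contrastive_loss(spat_a, spat_b, temp_a, temp_b,
                          w_spatial=0.5, w_temporal=0.5, temperature=0.1):
    """Weighted sum of a spatial and a temporal contrastive term, as a
    hypothetical stand-in for CURL's dual-contrastive objective."""
    return (w_spatial * info_nce(spat_a, spat_b, temperature)
            + w_temporal * info_nce(temp_a, temp_b, temperature))
```

Matched views (e.g. two augmentations of the same movement segment) yield a lower loss than mismatched views, which is the property the self-supervised stage exploits to separate movement from non-movement representations.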
Related papers
- Beyond Benchmarks of IUGC: Rethinking Requirements of Deep Learning Methods for Intrapartum Ultrasound Biometry from Fetal Ultrasound Videos [58.71502465551297]
The Intrapartum Ultrasound Grand Challenge (IUGC), co-hosted with MICCAI 2024, introduces a clinically oriented multi-task automatic measurement framework that integrates standard plane classification, fetal head-pubic symphysis segmentation, and biometry. The challenge releases the largest multi-center intrapartum ultrasound video dataset to date, comprising 774 videos (68,106 frames) collected from three hospitals.
arXiv Detail & Related papers (2026-02-13T13:28:22Z) - An Automatic Detection Method for Hematoma Features in Placental Abruption Ultrasound Images Based on Few-Shot Learning [11.678844582870523]
Placental abruption is a severe complication during pregnancy, and its early, accurate diagnosis is crucial for ensuring maternal and fetal safety. This paper proposes an improved model, EH-YOLOv11n, based on few-shot learning, aiming to achieve automatic detection of hematoma features in placental ultrasound images. Experimental results demonstrate a detection accuracy of 78%, representing a 2.5% improvement over YOLOv11n and a 13.7% increase over YOLOv8.
arXiv Detail & Related papers (2025-10-24T14:20:34Z) - Epistemic-aware Vision-Language Foundation Model for Fetal Ultrasound Interpretation [83.02147613524032]
We introduce FetalMind, a medical AI system tailored to fetal ultrasound for both report generation and diagnosis. We propose Salient Epistemic Disentanglement (SED), which injects an expert-curated bipartite graph into the model to decouple view-disease associations. FetalMind outperforms open- and closed-source baselines across all gestational stages, achieving +14% average gains and +61.2% higher accuracy on critical conditions.
arXiv Detail & Related papers (2025-10-14T19:57:03Z) - FHRFormer: A Self-supervised Transformer Approach for Fetal Heart Rate Inpainting and Forecasting [0.34202935599316514]
Fetal heart rate (FHR) monitoring plays a crucial role in assessing fetal well-being during prenatal care. Applying artificial intelligence (AI) methods to analyze large datasets of continuous FHR monitoring episodes may offer novel insights into predicting the risk of needing breathing assistance or interventions. We propose a masked transformer-based autoencoder approach to reconstruct missing FHR signals by capturing both spatial and frequency components of the data.
arXiv Detail & Related papers (2025-09-25T07:40:21Z) - Determining Fetal Orientations From Blind Sweep Ultrasound Video [1.3456699275044242]
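The masked-reconstruction idea behind the FHRFormer entry above (hide spans of the heart-rate trace, reconstruct them, score only the hidden samples) can be illustrated without the transformer backbone. The span-masking scheme and function names below are assumptions for illustration, with plain linear interpolation standing in for the learned inpainting model.

```python
import numpy as np

def random_span_mask(length, mask_ratio=0.25, span=10, rng=None):
    """Boolean mask hiding contiguous spans, mimicking the signal-loss
    gaps common in continuous FHR recordings. Spans may overlap."""
    if rng is None:
        rng = np.random.default_rng()
    mask = np.zeros(length, dtype=bool)
    n_spans = int(length * mask_ratio / span)
    for start in rng.choice(length - span, size=n_spans, replace=False):
        mask[start:start + span] = True
    return mask

def masked_reconstruction_loss(signal, reconstruction, mask):
    """MSE over the hidden positions only, as in masked-autoencoder
    training: visible samples contribute nothing to the loss."""
    return float(np.mean((signal[mask] - reconstruction[mask]) ** 2))

# Linear interpolation as a trivial stand-in for the learned inpainter.
t = np.arange(600)
signal = np.sin(t / 15.0)                 # synthetic quasi-periodic trace
mask = random_span_mask(len(t), rng=np.random.default_rng(1))
recon = signal.copy()
recon[mask] = np.interp(t[mask], t[~mask], signal[~mask])
loss = masked_reconstruction_loss(signal, recon, mask)
```

In a real masked-autoencoder setup the interpolation step is replaced by the model's forward pass, but the loss is computed the same way: only on what the model could not see.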
The work distinguishes itself by introducing automated fetal lie prediction and by proposing an assistive paradigm that augments sonographer expertise rather than replacing it. Future research will focus on enhancing acquisition efficiency and exploring real-time clinical integration to improve workflow and support for obstetric clinicians.
arXiv Detail & Related papers (2025-04-09T12:51:15Z) - Goal-conditioned reinforcement learning for ultrasound navigation guidance [4.648318344224063]
We propose a novel ultrasound navigation assistance method based on contrastive learning as goal-conditioned reinforcement learning (G)
We augment the previous framework using a novel contrastive patient batching method (CPB) and a data-augmented contrastive loss.
Our method was developed with a large dataset of 789 patients and obtained an average error of 6.56 mm in position and 9.36 degrees in angle.
arXiv Detail & Related papers (2024-05-02T16:01:58Z) - A Survey of the Impact of Self-Supervised Pretraining for Diagnostic Tasks with Radiological Images [71.26717896083433]
Self-supervised pretraining has been observed to be effective at improving feature representations for transfer learning.
This review summarizes recent research into its usage in X-ray, computed tomography, magnetic resonance, and ultrasound imaging.
arXiv Detail & Related papers (2023-09-05T19:45:09Z) - A Deep Learning Approach to Predicting Collateral Flow in Stroke Patients Using Radiomic Features from Perfusion Images [58.17507437526425]
Collateral circulation results from specialized anastomotic channels which provide oxygenated blood to regions with compromised blood flow.
The actual grading is mostly done through manual inspection of the acquired images.
We present a deep learning approach to predicting collateral flow grading in stroke patients based on radiomic features extracted from MR perfusion data.
arXiv Detail & Related papers (2021-10-24T18:58:40Z) - Deep learning in the ultrasound evaluation of neonatal respiratory status [11.308283140003676]
Lung ultrasound imaging is attracting growing interest from the scientific community.
Image analysis and pattern recognition approaches have proven their ability to fully exploit the rich information contained in these data.
We present a thorough analysis of recent deep learning networks and training strategies carried out on a vast and challenging multicenter dataset.
arXiv Detail & Related papers (2020-10-31T18:57:55Z) - Hybrid Attention for Automatic Segmentation of Whole Fetal Head in Prenatal Ultrasound Volumes [52.53375964591765]
We propose the first fully-automated solution to segment the whole fetal head in US volumes.
The segmentation task is firstly formulated as an end-to-end volumetric mapping under an encoder-decoder deep architecture.
We then combine the segmentor with a proposed hybrid attention scheme (HAS) to select discriminative features and suppress the non-informative volumetric features.
arXiv Detail & Related papers (2020-04-28T14:43:05Z) - Bulbar ALS Detection Based on Analysis of Voice Perturbation and Vibrato [68.97335984455059]
The purpose of this work was to verify the suitability of the sustained vowel phonation test for automatic detection of patients with ALS.
We propose an enhanced procedure for separating the voice signal into fundamental periods, which is required for calculating the measurements.
arXiv Detail & Related papers (2020-03-24T12:49:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.