Advancing Fetal Ultrasound Image Quality Assessment in Low-Resource Settings
- URL: http://arxiv.org/abs/2507.22802v1
- Date: Wed, 30 Jul 2025 16:09:29 GMT
- Title: Advancing Fetal Ultrasound Image Quality Assessment in Low-Resource Settings
- Authors: Dongli He, Hu Wang, Mohammad Yaqub
- Abstract summary: We leverage FetalCLIP, a vision-language model pretrained on a curated dataset of over 210,000 fetal ultrasound image-caption pairs. We introduce an IQA model adapted from FetalCLIP using Low-Rank Adaptation (LoRA) and evaluate it on the ACOUSLIC-AI dataset. We show that an adapted segmentation model, when repurposed for classification, further improves performance, achieving an F1 score of 0.771.
- Score: 3.982826074217475
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate fetal biometric measurements, such as abdominal circumference, play a vital role in prenatal care. However, obtaining high-quality ultrasound images for these measurements heavily depends on the expertise of sonographers, posing a significant challenge in low-income countries due to the scarcity of trained personnel. To address this issue, we leverage FetalCLIP, a vision-language model pretrained on a curated dataset of over 210,000 fetal ultrasound image-caption pairs, to perform automated fetal ultrasound image quality assessment (IQA) on blind-sweep ultrasound data. We introduce FetalCLIP$_{CLS}$, an IQA model adapted from FetalCLIP using Low-Rank Adaptation (LoRA), and evaluate it on the ACOUSLIC-AI dataset against six CNN and Transformer baselines. FetalCLIP$_{CLS}$ achieves the highest F1 score of 0.757. Moreover, we show that an adapted segmentation model, when repurposed for classification, further improves performance, achieving an F1 score of 0.771. Our work demonstrates how parameter-efficient fine-tuning of fetal ultrasound foundation models can enable task-specific adaptations, advancing prenatal care in resource-limited settings. The experimental code is available at: https://github.com/donglihe-hub/FetalCLIP-IQA.
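The linked repository holds the actual implementation; the sketch below is only a hypothetical illustration of the two ingredients the abstract names: a LoRA update on a frozen pretrained layer, plus a small classification head on a CLIP-style image encoder. The class names, rank, and embedding size are illustrative assumptions, not the authors' code:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + scale * (B A) x."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # freeze pretrained weights
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

class FetalClipIQA(nn.Module):
    """Hypothetical IQA classifier: pretrained image encoder + linear head."""
    def __init__(self, encoder: nn.Module, embed_dim: int = 512, n_classes: int = 2):
        super().__init__()
        self.encoder = encoder                           # LoRA-injected vision tower
        self.head = nn.Linear(embed_dim, n_classes)      # small trainable classifier

    def forward(self, images):
        return self.head(self.encoder(images))
```

Only the `lora_a`/`lora_b` matrices and the head would be trained, which is what makes such fine-tuning parameter-efficient.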
Related papers
- The Efficacy of Semantics-Preserving Transformations in Self-Supervised Learning for Medical Ultrasound [60.80780313225093]
This study systematically investigated the impact of data augmentation and preprocessing strategies in self-supervised learning for lung ultrasound.
Three data augmentation pipelines were assessed: a baseline pipeline commonly used across imaging domains, a novel semantics-preserving pipeline designed for ultrasound, and a distilled set of the most effective transformations from both pipelines.
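The summary does not list the transformations themselves; the sketch below is only a plausible illustration of the contrast between a generic baseline pipeline and a semantics-preserving one for grayscale ultrasound (all transform choices are assumptions):

```python
import torchvision.transforms as T

# Hypothetical semantics-preserving pipeline: mild geometric and intensity
# changes that keep anatomy and speckle statistics plausible; no color ops.
semantics_preserving = T.Compose([
    T.RandomResizedCrop(224, scale=(0.8, 1.0)),    # keep most of the field of view
    T.RandomRotation(degrees=10),                  # small probe-angle variation
    T.ColorJitter(brightness=0.2, contrast=0.2),   # gain / dynamic-range changes
    T.ToTensor(),
])

# Generic baseline pipeline common across imaging domains, for contrast.
baseline = T.Compose([
    T.RandomResizedCrop(224, scale=(0.2, 1.0)),    # aggressive cropping
    T.RandomHorizontalFlip(),
    T.GaussianBlur(kernel_size=5),
    T.ToTensor(),
])
```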
arXiv Detail & Related papers (2025-04-10T16:26:47Z)
- FetalCLIP: A Visual-Language Foundation Model for Fetal Ultrasound Image Analysis [0.676810348604193]
FetalCLIP is a vision-language foundation model capable of generating universal representations of fetal ultrasound images.
It was pre-trained using a multimodal learning approach on a diverse dataset of 210,035 fetal ultrasound images paired with text.
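Image-caption pretraining of this kind typically optimizes a symmetric contrastive objective; a minimal sketch of the standard CLIP-style loss (the temperature value is an assumption):

```python
import torch
import torch.nn.functional as F

def clip_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of (image, caption) pairs.
    Matching pairs lie on the diagonal of the similarity matrix."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.T / temperature      # (batch, batch)
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))
```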
arXiv Detail & Related papers (2025-02-20T18:30:34Z)
- Privacy-Preserving Federated Foundation Model for Generalist Ultrasound Artificial Intelligence [83.02106623401885]
We present UltraFedFM, an innovative privacy-preserving ultrasound foundation model.
UltraFedFM is collaboratively pre-trained using federated learning across 16 distributed medical institutions in 9 countries.
It achieves an average area under the receiver operating characteristic curve of 0.927 for disease diagnosis and a dice similarity coefficient of 0.878 for lesion segmentation.
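Both headline numbers are standard metrics; for reference, a small sketch of how they can be computed (function names are ours, not UltraFedFM's API):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    intersection = np.logical_and(pred_mask, true_mask).sum()
    return (2.0 * intersection + eps) / (pred_mask.sum() + true_mask.sum() + eps)

def evaluate(y_true, y_prob, pred_masks, true_masks):
    auroc = roc_auc_score(y_true, y_prob)        # diagnosis (0.927 reported)
    dice = np.mean([dice_coefficient(p, t)       # segmentation (0.878 reported)
                    for p, t in zip(pred_masks, true_masks)])
    return auroc, dice
```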
arXiv Detail & Related papers (2024-11-25T13:40:11Z)
- Efficient Feature Extraction Using Light-Weight CNN Attention-Based Deep Learning Architectures for Ultrasound Fetal Plane Classification [3.998431476275487]
We propose a lightweight artificial intelligence architecture to classify fetal planes in the largest benchmark ultrasound dataset.
The approach fine-tunes lightweight EfficientNet feature-extraction backbones pre-trained on ImageNet-1k.
Our methodology incorporates an attention mechanism to refine features and a 3-layer perceptron for classification, achieving superior performance with the highest Top-1 accuracy of 96.25%, Top-2 accuracy of 99.80%, and F1 score of 0.9576.
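A hypothetical PyTorch sketch of the described recipe: EfficientNet-B0 features, a squeeze-and-excitation-style channel attention block, and a 3-layer perceptron head. The attention design, layer widths, and class count are illustrative assumptions rather than the paper's exact architecture:

```python
import torch.nn as nn
from torchvision.models import efficientnet_b0, EfficientNet_B0_Weights

class PlaneClassifier(nn.Module):
    def __init__(self, n_classes: int = 6):
        super().__init__()
        backbone = efficientnet_b0(weights=EfficientNet_B0_Weights.IMAGENET1K_V1)
        self.features = backbone.features              # (B, 1280, H', W') maps
        self.attn = nn.Sequential(                     # SE-style channel attention
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(1280, 128), nn.ReLU(),
            nn.Linear(128, 1280), nn.Sigmoid(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(                      # 3-layer perceptron head
            nn.Linear(1280, 512), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        f = self.features(x)
        w = self.attn(f).unsqueeze(-1).unsqueeze(-1)   # per-channel weights
        return self.mlp(self.pool(f * w).flatten(1))   # reweight, pool, classify
```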
arXiv Detail & Related papers (2024-10-22T20:02:38Z)
- Multi-Task Learning Approach for Unified Biometric Estimation from Fetal Ultrasound Anomaly Scans [0.8213829427624407]
We propose a multi-task learning approach to classify the scanned region as head, abdomen, or femur and to jointly estimate the corresponding biometric measurements.
We were able to achieve a mean absolute error (MAE) of 1.08 mm on head circumference, 1.44 mm on abdomen circumference and 1.10 mm on femur length with a classification accuracy of 99.91%.
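A minimal sketch of such a multi-task setup: a shared backbone feeding a 3-way region classifier and a measurement regressor, trained with a joint loss. The backbone, feature size, and loss weighting below are assumptions:

```python
import torch.nn as nn
import torch.nn.functional as F

class BiometryMultiTask(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int = 512):
        super().__init__()
        self.backbone = backbone
        self.cls_head = nn.Linear(feat_dim, 3)   # head / abdomen / femur
        self.reg_head = nn.Linear(feat_dim, 1)   # circumference or length (mm)

    def forward(self, x):
        f = self.backbone(x)
        return self.cls_head(f), self.reg_head(f)

def multitask_loss(logits, measure_pred, region, measure, lam=1.0):
    # Joint objective: cross-entropy for the region label, L1 (which directly
    # tracks the reported MAE) for the biometric measurement.
    return (F.cross_entropy(logits, region)
            + lam * F.l1_loss(measure_pred.squeeze(-1), measure))
```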
arXiv Detail & Related papers (2023-11-16T06:35:02Z)
- A Federated Learning Framework for Stenosis Detection [70.27581181445329]
This study explores the use of Federated Learning (FL) for stenosis detection in coronary angiography (CA) images.
Two heterogeneous datasets from two institutions were considered: dataset 1 includes 1219 images from 200 patients, acquired at the Ospedale Riuniti of Ancona (Italy);
dataset 2 includes 7492 sequential images from 90 patients from a previous study available in the literature.
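With clients this unbalanced (1219 vs. 7492 images), FL aggregation is typically weighted by local dataset size; a minimal FedAvg sketch (the paper may use a different aggregation rule):

```python
def fedavg(client_states, client_sizes):
    """Average client state_dicts, weighting each by its dataset size."""
    total = float(sum(client_sizes))
    avg = {}
    for key in client_states[0]:
        avg[key] = sum(state[key].float() * (n / total)
                       for state, n in zip(client_states, client_sizes))
    return avg

# e.g. global_state = fedavg([model1.state_dict(), model2.state_dict()],
#                            [1219, 7492])
```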
arXiv Detail & Related papers (2023-10-30T11:13:40Z)
- Towards Realistic Ultrasound Fetal Brain Imaging Synthesis [0.7315240103690552]
There are few public ultrasound fetal imaging datasets due to insufficient amounts of clinical data, patient privacy, rare occurrence of abnormalities in general practice, and limited experts for data collection and validation.
To address such data scarcity, we propose generative adversarial network (GAN)-based models, diffusion-super-resolution-GAN and transformer-based-GAN, to synthesise images of fetal ultrasound brain planes from one public dataset.
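Both proposed models build on the adversarial objective; purely for orientation, a generic GAN training step looks like the sketch below (`G`, `D`, and the latent size are placeholders, not the paper's architectures):

```python
import torch
import torch.nn as nn

def gan_step(G, D, real, opt_g, opt_d, z_dim=100):
    """One alternating update: discriminator, then generator."""
    bce = nn.BCEWithLogitsLoss()
    ones = torch.ones(real.size(0), 1, device=real.device)
    zeros = torch.zeros(real.size(0), 1, device=real.device)
    z = torch.randn(real.size(0), z_dim, device=real.device)

    # Discriminator: real brain planes -> 1, synthesized planes -> 0
    d_loss = bce(D(real), ones) + bce(D(G(z).detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make D label synthesized planes as real
    g_loss = bce(D(G(z)), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```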
arXiv Detail & Related papers (2023-04-08T07:07:20Z)
- Localizing Scan Targets from Human Pose for Autonomous Lung Ultrasound Imaging [61.60067283680348]
With the advent of the COVID-19 global pandemic, there is a need to fully automate ultrasound imaging.
We propose a vision-based, data-driven method that incorporates learning-based computer vision techniques.
Our method attains an accuracy of 15.52 (9.47) mm for probe positioning and 4.32 (3.69) degrees for probe orientation, with a success rate above 80% under an error threshold of 25 mm for all scan targets.
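The numbers follow the usual mean (standard deviation) convention; a small sketch of how positioning error and the 25 mm success rate could be computed from predicted and ground-truth target positions:

```python
import numpy as np

def positioning_stats(pred_xyz, true_xyz, threshold_mm=25.0):
    """Mean/std Euclidean error (mm) and fraction of targets under threshold."""
    errors = np.linalg.norm(pred_xyz - true_xyz, axis=1)
    return errors.mean(), errors.std(), (errors < threshold_mm).mean()
```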
arXiv Detail & Related papers (2022-12-15T14:34:12Z)
- Self-supervised contrastive learning of echocardiogram videos enables label-efficient cardiac disease diagnosis [48.64462717254158]
We developed a self-supervised contrastive learning approach, EchoCLR, tailored to echocardiogram videos.
When fine-tuned on small portions of labeled data, EchoCLR pretraining significantly improved classification performance for left ventricular hypertrophy (LVH) and aortic stenosis (AS).
EchoCLR is unique in its ability to learn representations of medical videos and demonstrates that SSL can enable label-efficient disease classification from small, labeled datasets.
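A minimal sketch of a SimCLR-style objective on paired clips from the same echocardiogram video, in the spirit of (though not identical to) EchoCLR:

```python
import torch
import torch.nn.functional as F

def paired_clip_loss(clip_a, clip_b, encoder, temperature=0.1):
    """Pull embeddings of two clips from the same study together;
    push clips from other studies in the batch apart."""
    za = F.normalize(encoder(clip_a), dim=-1)          # (B, D)
    zb = F.normalize(encoder(clip_b), dim=-1)
    logits = za @ zb.T / temperature
    targets = torch.arange(za.size(0), device=za.device)
    return F.cross_entropy(logits, targets)
```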
arXiv Detail & Related papers (2022-07-23T19:17:26Z)
- Enabling faster and more reliable sonographic assessment of gestational age through machine learning [1.3238745915345225]
Fetal ultrasounds are an essential part of prenatal care and can be used to estimate gestational age (GA).
We developed three AI models: an image model using standard plane images, a video model using fly-to videos, and an ensemble model (combining both image and video).
All three were statistically superior to standard fetal biometry-based GA estimates derived by expert sonographers.
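The summary does not say how the ensemble combines the two models; a weighted mean of their gestational-age estimates is one simple possibility (the weight is an assumption):

```python
import numpy as np

def ensemble_ga(image_ga_days, video_ga_days, w_image=0.5):
    """Combine image-model and video-model GA estimates (in days)."""
    return (w_image * np.asarray(image_ga_days)
            + (1.0 - w_image) * np.asarray(video_ga_days))
```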
arXiv Detail & Related papers (2022-03-22T17:15:56Z)
- Hybrid Attention for Automatic Segmentation of Whole Fetal Head in Prenatal Ultrasound Volumes [52.53375964591765]
We propose the first fully-automated solution to segment the whole fetal head in US volumes.
The segmentation task is first formulated as an end-to-end volumetric mapping under an encoder-decoder deep architecture.
We then combine the segmentor with a proposed hybrid attention scheme (HAS) to select discriminative features and suppress the non-informative volumetric features.
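A minimal sketch of the gating idea behind such a scheme: a learned per-channel gate over volumetric features that keeps discriminative channels and suppresses non-informative ones. The paper's HAS block is more elaborate; this only illustrates the mechanism:

```python
import torch.nn as nn

class ChannelGate3D(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (B, C, D, H, W) features
        w = self.gate(x).view(x.size(0), -1, 1, 1, 1)
        return x * w                            # reweight volumetric channels
```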
arXiv Detail & Related papers (2020-04-28T14:43:05Z)