Fusing Radiomic Features with Deep Representations for Gestational Age Estimation in Fetal Ultrasound Images
- URL: http://arxiv.org/abs/2506.20407v2
- Date: Fri, 27 Jun 2025 21:41:48 GMT
- Title: Fusing Radiomic Features with Deep Representations for Gestational Age Estimation in Fetal Ultrasound Images
- Authors: Fangyijie Wang, Yuan Liang, Sourav Bhattacharjee, Abey Campbell, Kathleen M. Curran, Guénolé Silvestre
- Abstract summary: We present a novel feature fusion framework to estimate gestational age (GA) using fetal ultrasound images without any measurement information. Our framework estimates GA with a mean absolute error of 8.0 days across three trimesters, outperforming current machine learning-based methods at these gestational ages.
- Score: 5.626701242497243
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Accurate gestational age (GA) estimation, ideally through fetal ultrasound measurement, is a crucial aspect of providing excellent antenatal care. However, deriving GA from manual fetal biometric measurements is operator-dependent and time-consuming. Hence, automatic computer-assisted methods are needed in clinical practice. In this paper, we present a novel feature fusion framework to estimate GA using fetal ultrasound images without any measurement information. We adopt a deep learning model to extract deep representations from ultrasound images. We extract radiomic features to reveal patterns and characteristics of fetal brain growth. To harness the interpretability of radiomics in medical imaging analysis, we estimate GA by fusing radiomic features and deep representations. Our framework estimates GA with a mean absolute error of 8.0 days across three trimesters, outperforming current machine learning-based methods at these gestational ages. Experimental results demonstrate the robustness of our framework across different populations in diverse geographical regions. Our code is publicly available at https://github.com/13204942/RadiomicsImageFusion_FetalUS.
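The fusion step described in the abstract can be sketched in a few lines of NumPy. Everything below is a hypothetical stand-in, not the authors' implementation: the radiomic features are reduced to simple first-order statistics, a fixed random projection plays the role of a frozen CNN encoder, and the GA labels are synthetic. It only illustrates the late-fusion pattern of concatenating the two feature vectors before a regression head.

```python
import numpy as np

rng = np.random.default_rng(0)
IMG_SHAPE = (32, 32)

def radiomic_features(img):
    """Toy first-order radiomic statistics: mean, std, skewness, entropy."""
    flat = img.ravel().astype(float)
    mean = flat.mean()
    std = flat.std() + 1e-8
    skew = float(np.mean(((flat - mean) / std) ** 3))
    hist, _ = np.histogram(flat, bins=32, density=True)
    p = hist[hist > 0]
    entropy = float(-np.sum(p * np.log(p)))
    return np.array([mean, std, skew, entropy])

# A fixed random projection stands in for a frozen CNN encoder.
PROJ = rng.standard_normal((IMG_SHAPE[0] * IMG_SHAPE[1], 16)) / 32.0

def deep_embedding(img):
    return img.ravel().astype(float) @ PROJ

def fused(img):
    # Late fusion: concatenate radiomic and deep feature vectors.
    return np.concatenate([radiomic_features(img), deep_embedding(img)])

# Synthetic "scans" and GA labels (in days), just to exercise the pipeline.
imgs = [rng.random(IMG_SHAPE) for _ in range(64)]
X = np.stack([fused(im) for im in imgs])   # shape (64, 20)
y = rng.uniform(60.0, 280.0, size=len(imgs))

# Ridge-regularised linear regression head on the fused features.
lam = 1e-2
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
pred = X @ w
```

In the paper the deep branch would be a learned encoder and the radiomic branch a full radiomics pipeline; the concatenation-plus-regressor structure is the part this sketch keeps.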
Related papers
- Ultrasound Lung Aeration Map via Physics-Aware Neural Operators [78.6077820217471]
Lung ultrasound is a growing modality in clinics for diagnosing acute and chronic lung diseases. It is complicated by complex reverberations from the pleural interface caused by the inability of ultrasound to penetrate air. We propose LUNA, an AI model that directly reconstructs lung aeration maps from RF data.
arXiv Detail & Related papers (2025-01-02T09:24:34Z)
- Multi-Center Study on Deep Learning-Assisted Detection and Classification of Fetal Central Nervous System Anomalies Using Ultrasound Imaging [11.261565838608488]
Prenatal ultrasound evaluates fetal growth and detects congenital abnormalities during pregnancy. We construct a deep learning model to improve the overall accuracy of the diagnosis of fetal cranial anomalies.
arXiv Detail & Related papers (2025-01-01T07:56:26Z)
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset, consisting of unseen synthetic data and images collected from silicon aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- Breast Ultrasound Report Generation using LangChain [58.07183284468881]
We propose the integration of multiple image analysis tools through a LangChain using Large Language Models (LLM) into the breast reporting process.
Our method can accurately extract relevant features from ultrasound images, interpret them in a clinical context, and produce comprehensive and standardized reports.
arXiv Detail & Related papers (2023-12-05T00:28:26Z)
- AiAReSeg: Catheter Detection and Segmentation in Interventional Ultrasound using Transformers [75.20925220246689]
Endovascular surgeries are performed under the gold standard of fluoroscopy, which uses ionising radiation to visualise catheters and vasculature.
This work proposes a solution using an adaptation of a state-of-the-art machine learning transformer architecture to detect and segment catheters in axial interventional Ultrasound image sequences.
arXiv Detail & Related papers (2023-09-25T19:34:12Z)
- Deep Learning Fetal Ultrasound Video Model Match Human Observers in Biometric Measurements [8.468600443532413]
This work investigates the use of deep convolutional neural networks (CNN) to automatically perform measurements of fetal body parts.
The observed differences in measurement values were within the range of inter- and intra-observer variability.
We argue that FUVAI has the potential to assist sonographers who perform fetal biometric measurements in clinical settings.
arXiv Detail & Related papers (2022-05-27T09:00:19Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
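The building block named here, a Discrete Wavelet Transform, can be illustrated with one level of a 2-D Haar DWT in plain NumPy. This is a minimal sketch of the transform itself, not the paper's encoding method: it splits an even-sized image into a low-frequency approximation (LL) and three high-frequency detail subbands (LH, HL, HH).

```python
import numpy as np

def haar_dwt2_level1(img):
    """One level of a 2-D Haar DWT: returns (LL, LH, HL, HH) subbands.
    Image dimensions must be even."""
    a = img.astype(float)
    # Rows: average / difference of adjacent pixel pairs.
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # Columns: repeat the pairing on both row outputs.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

img = np.arange(16.0).reshape(4, 4)  # toy 4x4 "radiograph"
ll, lh, hl, hh = haar_dwt2_level1(img)
```

On this smooth ramp image the detail subbands are constant or zero, which is the property such methods exploit: most energy concentrates in LL while high-frequency content is isolated for separate encoding.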
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- Enabling faster and more reliable sonographic assessment of gestational age through machine learning [1.3238745915345225]
Fetal ultrasounds are an essential part of prenatal care and can be used to estimate gestational age (GA).
We developed three AI models: an image model using standard plane images, a video model using fly-to videos, and an ensemble model combining both image and video.
All three were statistically superior to standard fetal biometry-based GA estimates derived by expert sonographers.
arXiv Detail & Related papers (2022-03-22T17:15:56Z)
- FetalNet: Multi-task deep learning framework for fetal ultrasound biometric measurements [11.364211664829567]
We propose an end-to-end multi-task neural network called FetalNet with an attention mechanism and stacked module for fetal ultrasound scan video analysis.
The main goal in fetal ultrasound video analysis is to find proper standard planes to measure the fetal head, abdomen and femur.
Our method called FetalNet outperforms existing state-of-the-art methods in both classification and segmentation in fetal ultrasound video recordings.
arXiv Detail & Related papers (2021-07-14T19:13:33Z)
- AutoFB: Automating Fetal Biometry Estimation from Standard Ultrasound Planes [10.745788530692305]
The proposed framework semantically segments the key fetal anatomies using state-of-the-art segmentation models.
We show that the network with the best segmentation performance tends to be more accurate for biometry estimation.
arXiv Detail & Related papers (2021-07-12T08:42:31Z)
- Deep learning in the ultrasound evaluation of neonatal respiratory status [11.308283140003676]
Lung ultrasound imaging is reaching growing interest from the scientific community.
Image analysis and pattern recognition approaches have proven their ability to fully exploit the rich information contained in these data.
We present a thorough analysis of recent deep learning networks and training strategies carried out on a vast and challenging multicenter dataset.
arXiv Detail & Related papers (2020-10-31T18:57:55Z)
- Hybrid Attention for Automatic Segmentation of Whole Fetal Head in Prenatal Ultrasound Volumes [52.53375964591765]
We propose the first fully-automated solution to segment the whole fetal head in US volumes.
The segmentation task is firstly formulated as an end-to-end volumetric mapping under an encoder-decoder deep architecture.
We then combine the segmentor with a proposed hybrid attention scheme (HAS) to select discriminative features and suppress the non-informative volumetric features.
arXiv Detail & Related papers (2020-04-28T14:43:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.