Whole-examination AI estimation of fetal biometrics from 20-week
ultrasound scans
- URL: http://arxiv.org/abs/2401.01201v1
- Date: Tue, 2 Jan 2024 13:04:41 GMT
- Title: Whole-examination AI estimation of fetal biometrics from 20-week
ultrasound scans
- Authors: Lorenzo Venturini, Samuel Budd, Alfonso Farruggia, Robert Wright,
Jacqueline Matthew, Thomas G. Day, Bernhard Kainz, Reza Razavi, Jo V. Hajnal
- Abstract summary: We introduce a paradigm shift that attains human-level performance in biometric measurement.
We use a convolutional neural network to classify each frame of an ultrasound video recording.
We measure fetal biometrics in every frame where appropriate anatomy is visible.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The current approach to fetal anomaly screening is based on biometric
measurements derived from individually selected ultrasound images. In this
paper, we introduce a paradigm shift that attains human-level performance in
biometric measurement by aggregating automatically extracted biometrics from
every frame across an entire scan, with no need for operator intervention. We
use a convolutional neural network to classify each frame of an ultrasound
video recording. We then measure fetal biometrics in every frame where
appropriate anatomy is visible. We use a Bayesian method to estimate the true
value of each biometric from a large number of measurements and
probabilistically reject outliers. We performed a retrospective experiment on
1457 recordings (comprising 48 million frames) of 20-week ultrasound scans,
estimated fetal biometrics in those scans and compared our estimates to the
measurements sonographers took during the scan. Our method achieves human-level
performance in estimating fetal biometrics and estimates well-calibrated
credible intervals in which the true biometric value is expected to lie.
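
The abstract leaves the aggregation model unspecified, but the core idea (pool thousands of noisy per-frame measurements, probabilistically reject outliers, and report a calibrated interval) can be sketched as a two-component inlier/outlier mixture fitted by expectation-maximization. The Gaussian inlier model, uniform outlier model, and every constant below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def robust_estimate(measurements, lo, hi, noise_sd=1.0, n_iter=50):
    """Estimate a biometric from many noisy per-frame measurements.

    Inliers are modelled as N(mu, noise_sd^2); outliers as Uniform(lo, hi).
    A generic EM sketch with assumed constants, not the paper's model.
    """
    x = np.asarray(measurements, dtype=float)
    mu = np.median(x)                    # robust initialisation
    p_in = 0.8                           # prior inlier fraction (assumed)
    out_density = 1.0 / (hi - lo)        # flat outlier likelihood
    for _ in range(n_iter):
        # E-step: per-frame probability that the measurement is an inlier
        gauss = np.exp(-0.5 * ((x - mu) / noise_sd) ** 2) / (
            noise_sd * np.sqrt(2.0 * np.pi))
        resp = p_in * gauss / (p_in * gauss + (1.0 - p_in) * out_density)
        # M-step: update the mean and the inlier fraction
        mu = np.sum(resp * x) / np.sum(resp)
        p_in = resp.mean()
    # Approximate 95% credible interval from the effective sample size
    se = noise_sd / np.sqrt(resp.sum())
    return mu, (mu - 1.96 * se, mu + 1.96 * se)

# Example: 200 plausible femur-length frames plus 20 gross outliers (mm)
rng = np.random.default_rng(0)
frames = np.concatenate([rng.normal(33.0, 1.0, 200), rng.uniform(10, 60, 20)])
print(robust_estimate(frames, lo=10, hi=60))
```

Because each scan contributes hundreds of valid frames, the effective sample size is large and the per-frame measurement noise largely averages out, which is what makes whole-examination aggregation competitive with a single expert-selected image.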
Related papers
- Multi-Task Learning Approach for Unified Biometric Estimation from Fetal Ultrasound Anomaly Scans
We propose a multi-task learning approach that classifies the imaged region as head, abdomen, or femur.
The approach achieves a mean absolute error (MAE) of 1.08 mm on head circumference, 1.44 mm on abdominal circumference, and 1.10 mm on femur length, with a classification accuracy of 99.91%.
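
As a rough illustration of such a multi-task setup, here is a minimal PyTorch sketch with a shared backbone, a region classifier, and a measurement regressor; the layer sizes, losses, and loss weighting are placeholders, not the paper's architecture:

```python
import torch
import torch.nn as nn

class MultiTaskBiometry(nn.Module):
    """Shared backbone with a region classifier (head/abdomen/femur) and a
    measurement regressor. Sizes and losses are placeholders, not the
    paper's architecture."""
    def __init__(self, n_regions=3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classify = nn.Linear(32, n_regions)  # which anatomy is imaged
        self.regress = nn.Linear(32, 1)           # biometric value in mm

    def forward(self, x):
        feats = self.backbone(x)
        return self.classify(feats), self.regress(feats)

model = MultiTaskBiometry()
frames = torch.randn(4, 1, 224, 224)             # dummy ultrasound frames
logits, mm = model(frames)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 2, 0])) + \
       nn.L1Loss()(mm.squeeze(1), torch.tensor([175.0, 150.3, 32.1, 170.0]))
loss.backward()
```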
arXiv Detail & Related papers (2023-11-16T06:35:02Z)
- Towards Unifying Anatomy Segmentation: Automated Generation of a Full-body CT Dataset via Knowledge Aggregation and Anatomical Guidelines
We generate a dataset of whole-body CT scans with 142 voxel-level labels for 533 volumes, providing comprehensive anatomical coverage.
Our proposed procedure does not rely on manual annotation during the label aggregation stage.
We release our trained unified anatomical segmentation model capable of predicting 142 anatomical structures on CT data.
arXiv Detail & Related papers (2023-07-25T09:48:13Z)
- Localizing Scan Targets from Human Pose for Autonomous Lung Ultrasound Imaging
With the advent of the COVID-19 global pandemic, there is a need to fully automate ultrasound imaging.
We propose a vision-based, data-driven method that incorporates learning-based computer vision techniques.
Our method attains an accuracy of 15.52 (9.47) mm for probe positioning and 4.32 (3.69)° for probe orientation, with a success rate above 80% under an error threshold of 25 mm for all scan targets.
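
A hedged sketch of the core idea, assuming detected human-pose keypoints are regressed to a probe position and orientation; the keypoint format, network, and output parametrisation are all illustrative, not the paper's pipeline:

```python
import torch
import torch.nn as nn

class PoseToProbe(nn.Module):
    """Regress a scan-target probe pose from detected body keypoints:
    (x, y, z) position plus yaw and pitch. The keypoint format, network,
    and output parametrisation are illustrative assumptions."""
    def __init__(self, n_keypoints=17):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_keypoints * 3, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 5),   # (x, y, z) in mm, (yaw, pitch) in degrees
        )

    def forward(self, keypoints):        # (batch, n_keypoints, 3)
        return self.mlp(keypoints.flatten(1))

net = PoseToProbe()
print(net(torch.randn(2, 17, 3)).shape)  # torch.Size([2, 5])
```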
arXiv Detail & Related papers (2022-12-15T14:34:12Z)
- Facial Soft Biometrics for Recognition in the Wild: Recent Works, Annotation, and COTS Evaluation
We study the role of soft biometrics to enhance person recognition systems in unconstrained scenarios.
We consider two assumptions: 1) manual estimation of soft biometrics and 2) automatic estimation from two commercial off-the-shelf systems.
Experiments are carried out fusing soft biometrics with two state-of-the-art face recognition systems based on deep learning.
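
Score-level fusion of this kind can be as simple as a weighted sum of the face-matcher score and an aggregated soft-biometric similarity; the weight and normalisation below are assumptions for illustration, not the configuration evaluated in the paper:

```python
import numpy as np

def fuse(face_score, soft_scores, w_face=0.8):
    """Weighted score-level fusion of a face-matcher similarity with the
    mean soft-biometric similarity (both assumed in [0, 1]). The weight is
    an illustrative assumption, not the configuration from the paper."""
    return w_face * face_score + (1.0 - w_face) * float(np.mean(soft_scores))

# Example: strong face match, moderately agreeing soft traits
print(fuse(0.91, [0.7, 0.8, 0.6]))  # 0.868
```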
arXiv Detail & Related papers (2022-10-24T11:29:57Z)
- BiometryNet: Landmark-based Fetal Biometry Estimation from Standard Ultrasound Planes
This paper describes BiometryNet, an end-to-end landmark regression framework for fetal biometry estimation.
It includes a novel Dynamic Orientation Determination (DOD) method for enforcing measurement-specific orientation consistency during network training.
To validate our method, we assembled a dataset of 3,398 ultrasound images from 1,829 subjects acquired in three clinical sites with seven different ultrasound devices.
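
Once a landmark-regression network has predicted the two measurement endpoints, turning them into a biometric is plain geometry. A minimal helper, assuming pixel coordinates and a known pixel spacing (the function and names are hypothetical, not BiometryNet's code):

```python
import numpy as np

def landmark_measurement(p1, p2, mm_per_pixel):
    """Convert two predicted measurement endpoints (pixel coordinates)
    into a biometric such as femur length, given the pixel spacing.
    A hypothetical helper; BiometryNet's own post-processing may differ."""
    p1, p2 = np.asarray(p1, dtype=float), np.asarray(p2, dtype=float)
    return float(np.linalg.norm(p2 - p1) * mm_per_pixel)

# Endpoints 300 px apart at 0.11 mm/px -> 33.0 mm femur length
print(landmark_measurement((120, 80), (120, 380), mm_per_pixel=0.11))
```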
arXiv Detail & Related papers (2022-06-29T14:32:32Z)
- Deep Learning Fetal Ultrasound Video Model Match Human Observers in Biometric Measurements
This work investigates the use of deep convolutional neural networks (CNN) to automatically perform measurements of fetal body parts.
The observed differences in measurement values were within the range of inter- and intra-observer variability.
We argue that the proposed model, FUVAI, has the potential to assist sonographers who perform fetal biometric measurements in clinical settings.
arXiv Detail & Related papers (2022-05-27T09:00:19Z)
- FetalNet: Multi-task deep learning framework for fetal ultrasound biometric measurements
We propose an end-to-end multi-task neural network called FetalNet with an attention mechanism and stacked module for fetal ultrasound scan video analysis.
The main goal in fetal ultrasound video analysis is to find proper standard planes to measure the fetal head, abdomen and femur.
Our method, FetalNet, outperforms existing state-of-the-art methods in both classification and segmentation on fetal ultrasound video recordings.
arXiv Detail & Related papers (2021-07-14T19:13:33Z)
- AutoFB: Automating Fetal Biometry Estimation from Standard Ultrasound Planes
The proposed framework semantically segments the key fetal anatomies using state-of-the-art segmentation models.
We show that the network with the best segmentation performance tends to be more accurate for biometry estimation.
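
A common way to go from a segmentation mask to a biometric, sketched here with OpenCV: fit an ellipse to the mask's largest contour and evaluate Ramanujan's perimeter approximation. This is a standard post-processing recipe, assumed for illustration rather than taken from AutoFB:

```python
import numpy as np
import cv2

def circumference_mm(mask, mm_per_pixel):
    """Fit an ellipse to the largest contour of a binary head/abdomen mask
    and return its circumference via Ramanujan's perimeter approximation.
    A standard post-processing recipe, assumed here for illustration."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)
    (_, _), (d1, d2), _ = cv2.fitEllipse(contour)     # axis lengths, pixels
    a, b = 0.5 * d1 * mm_per_pixel, 0.5 * d2 * mm_per_pixel  # semi-axes, mm
    h = ((a - b) / (a + b)) ** 2
    return np.pi * (a + b) * (1 + 3 * h / (10 + np.sqrt(4 - 3 * h)))

# Synthetic elliptical mask: semi-axes 150 px and 100 px at 0.2 mm/px
mask = np.zeros((400, 400), np.uint8)
cv2.ellipse(mask, (200, 200), (150, 100), 0, 0, 360, 1, -1)
print(circumference_mm(mask, mm_per_pixel=0.2))       # ~158.7 mm
```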
arXiv Detail & Related papers (2021-07-12T08:42:31Z)
- Assisted Probe Positioning for Ultrasound Guided Radiotherapy Using Image Sequence Classification
Effective transperineal ultrasound image guidance in prostate external beam radiotherapy requires consistent alignment between probe and prostate at each session during patient set-up.
We demonstrate a method for ensuring accurate probe placement through joint classification of images and probe position data.
Using a multi-input multi-task algorithm, spatial coordinate data from an optically tracked ultrasound probe is combined with an image classifier using a recurrent neural network to generate two sets of predictions in real time.
The algorithm identified optimal probe alignment within a mean (standard deviation) range of 3.7° (1.2°) from
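
A minimal sketch of a multi-input multi-task recurrent design in this spirit, where per-frame image features are concatenated with tracked probe coordinates and a GRU emits two predictions per time step; all dimensions, heads, and the 6-DoF pose format are assumptions, not the paper's model:

```python
import torch
import torch.nn as nn

class ProbeGuidance(nn.Module):
    """Multi-input multi-task sketch: per-frame image features are fused
    with tracked probe coordinates and a GRU emits two predictions per time
    step (an image class and an alignment score). All dimensions and the
    6-DoF pose format are assumptions, not the paper's model."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.img_enc = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.rnn = nn.GRU(8 + 6, 32, batch_first=True)  # image + 6-DoF pose
        self.cls_head = nn.Linear(32, n_classes)        # task 1: image class
        self.align_head = nn.Linear(32, 1)              # task 2: alignment

    def forward(self, frames, probe_pose):
        b, t = frames.shape[:2]
        feats = self.img_enc(frames.flatten(0, 1)).view(b, t, -1)
        h, _ = self.rnn(torch.cat([feats, probe_pose], dim=-1))
        return self.cls_head(h), self.align_head(h)

net = ProbeGuidance()
cls, align = net(torch.randn(2, 10, 1, 64, 64), torch.randn(2, 10, 6))
print(cls.shape, align.shape)  # (2, 10, 4) (2, 10, 1)
```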
arXiv Detail & Related papers (2020-10-06T13:55:02Z)
- Appearance Learning for Image-based Motion Estimation in Tomography
In tomographic imaging, anatomical structures are reconstructed by applying a pseudo-inverse forward model to acquired signals.
Patient motion corrupts the geometry alignment in the reconstruction process resulting in motion artifacts.
We propose an appearance learning approach recognizing the structures of rigid motion independently from the scanned object.
arXiv Detail & Related papers (2020-06-18T09:49:11Z)
- Hybrid Attention for Automatic Segmentation of Whole Fetal Head in Prenatal Ultrasound Volumes
We propose the first fully-automated solution to segment the whole fetal head in US volumes.
The segmentation task is first formulated as an end-to-end volumetric mapping under an encoder-decoder deep architecture.
We then combine the segmentor with a proposed hybrid attention scheme (HAS) to select discriminative features and suppress non-informative volumetric features.
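
As a generic illustration of attention-based feature gating in a volumetric encoder-decoder, here is a small channel-attention gate; it is only in the spirit of the proposed hybrid attention scheme, whose actual design the summary does not specify:

```python
import torch
import torch.nn as nn

class ChannelGate3D(nn.Module):
    """Channel-attention gate for volumetric features: global context is
    squeezed to per-channel weights that emphasise discriminative channels
    and suppress uninformative ones. Only in the spirit of the paper's
    hybrid attention scheme; its actual design is not specified here."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                         # (batch, C, D, H, W)
        w = self.gate(x).view(x.size(0), -1, 1, 1, 1)
        return x * w

gate = ChannelGate3D(channels=16)
print(gate(torch.randn(1, 16, 32, 32, 32)).shape)  # (1, 16, 32, 32, 32)
```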
arXiv Detail & Related papers (2020-04-28T14:43:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.