AI system for fetal ultrasound in low-resource settings
- URL: http://arxiv.org/abs/2203.10139v1
- Date: Fri, 18 Mar 2022 19:39:34 GMT
- Title: AI system for fetal ultrasound in low-resource settings
- Authors: Ryan G. Gomes, Bellington Vwalika, Chace Lee, Angelica Willis, Marcin
Sieniek, Joan T. Price, Christina Chen, Margaret P. Kasaro, James A. Taylor,
Elizabeth M. Stringer, Scott Mayer McKinney, Ntazana Sindano, George E. Dahl,
William Goodnight III, Justin Gilmer, Benjamin H. Chi, Charles Lau, Terry
Spitz, T Saensuksopa, Kris Liu, Jonny Wong, Rory Pilgrim, Akib Uddin, Greg
Corrado, Lily Peng, Katherine Chou, Daniel Tse, Jeffrey S. A. Stringer,
Shravya Shetty
- Abstract summary: We developed and validated an artificial intelligence system that uses novice-acquired "blind sweep" ultrasound videos to estimate gestational age (GA) and fetal malpresentation.
Our AI models have the potential to assist in upleveling the capabilities of lightly trained ultrasound operators in low-resource settings.
- Score: 6.601152168099057
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite considerable progress in maternal healthcare, maternal and perinatal
deaths remain high in low-to-middle income countries. Fetal ultrasound is an
important component of antenatal care, but shortage of adequately trained
healthcare workers has limited its adoption. We developed and validated an
artificial intelligence (AI) system that uses novice-acquired "blind sweep"
ultrasound videos to estimate gestational age (GA) and fetal malpresentation.
We further addressed obstacles that may be encountered in low-resourced
settings. Using a simplified sweep protocol with real-time AI feedback on sweep
quality, we have demonstrated the generalization of model performance to
minimally trained novice ultrasound operators using low-cost ultrasound devices
with on-device AI integration. The GA model was non-inferior to standard fetal
biometry estimates with as few as two sweeps, and the fetal malpresentation
model had high AUC-ROCs across operators and devices. Our AI models have the
potential to assist in upleveling the capabilities of lightly trained
ultrasound operators in low-resource settings.
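The abstract reports high AUC-ROCs for the fetal malpresentation model across operators and devices. As a self-contained illustration (toy data and a hypothetical function, not the paper's code), AUC-ROC reduces to a pairwise ranking probability: the chance that a randomly chosen positive case scores higher than a randomly chosen negative one.

```python
# Illustrative sketch, not the paper's implementation: AUC-ROC computed
# directly from its pairwise-ranking definition, in plain Python.

def auc_roc(y_true, y_score):
    """AUC-ROC as the fraction of (positive, negative) pairs in which the
    positive example receives the higher score (ties count as half)."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy labels: 1 = non-cephalic (malpresentation), 0 = cephalic.
y_true = [0, 0, 1, 1, 0, 1, 0, 1]
y_score = [0.1, 0.3, 0.8, 0.35, 0.2, 0.9, 0.4, 0.7]
print(auc_roc(y_true, y_score))  # 0.9375
```

An AUC of 1.0 would mean every malpresenting case outranks every cephalic case; 0.5 is chance level.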
Related papers
- Efficient Feature Extraction Using Light-Weight CNN Attention-Based Deep Learning Architectures for Ultrasound Fetal Plane Classification [3.998431476275487]
We propose a lightweight artificial intelligence architecture to classify fetal planes in the largest benchmark ultrasound dataset.
The approach fine-tunes lightweight EfficientNet feature-extraction backbones pre-trained on ImageNet1k.
Our methodology incorporates the attention mechanism to refine features and 3-layer perceptrons for classification, achieving superior performance with the highest Top-1 accuracy of 96.25%, Top-2 accuracy of 99.80% and F1-Score of 0.9576.
arXiv Detail & Related papers (2024-10-22T20:02:38Z)
- Using Explainable AI for EEG-based Reduced Montage Neonatal Seizure Detection [2.206534289238751]
The gold-standard for neonatal seizure detection currently relies on continuous video-EEG monitoring.
A novel explainable deep learning model to automate the neonatal seizure detection process with a reduced EEG montage is proposed.
The presented model achieves an absolute improvement of 8.31% and 42.86% in area under the curve (AUC) and recall, respectively.
arXiv Detail & Related papers (2024-06-04T10:53:56Z)
- Enhancing Surgical Robots with Embodied Intelligence for Autonomous Ultrasound Scanning [24.014073238400137]
Ultrasound robots are increasingly used in medical diagnostics and early disease screening.
Current ultrasound robots lack the intelligence to understand human intentions and instructions.
We propose a novel Ultrasound Embodied Intelligence system that equips ultrasound robots with a large language model and domain knowledge.
arXiv Detail & Related papers (2024-05-01T11:39:38Z)
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset, consisting of unseen synthetic data and images collected from silicon aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- Training-free image style alignment for self-adapting domain shift on handheld ultrasound devices [54.476120039032594]
We propose the Training-free Image Style Alignment (TISA) framework to align the style of handheld device data to those of standard devices.
TISA can directly infer handheld device images without extra training and is suited for clinical applications.
arXiv Detail & Related papers (2024-02-17T07:15:23Z)
- Learning Autonomous Ultrasound via Latent Task Representation and Robotic Skills Adaptation [2.3830437836694185]
We propose the latent task representation and the robotic skills adaptation for autonomous ultrasound in this paper.
During the offline stage, the multimodal ultrasound skills are merged and encapsulated into a low-dimensional probability model.
During the online stage, the probability model will select and evaluate the optimal prediction.
arXiv Detail & Related papers (2023-07-25T08:32:36Z)
- Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method combines instrument pose estimation, online registration between the robotic and iOCT systems, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z)
- Localizing Scan Targets from Human Pose for Autonomous Lung Ultrasound Imaging [61.60067283680348]
With the advent of the COVID-19 global pandemic, there is a need to fully automate ultrasound imaging.
We propose a vision-based, data-driven method that incorporates learning-based computer vision techniques.
Our method attains an accuracy of 15.52 (9.47) mm for probe positioning and 4.32 (3.69) deg for probe orientation, with a success rate above 80% under an error threshold of 25 mm for all scan targets.
arXiv Detail & Related papers (2022-12-15T14:34:12Z)
- Robust and Efficient Medical Imaging with Self-Supervision [80.62711706785834]
We present REMEDIS, a unified representation learning strategy to improve robustness and data-efficiency of medical imaging AI.
We study a diverse range of medical imaging tasks and simulate three realistic application scenarios using retrospective data.
arXiv Detail & Related papers (2022-05-19T17:34:18Z)
- Enabling faster and more reliable sonographic assessment of gestational age through machine learning [1.3238745915345225]
Fetal ultrasounds are an essential part of prenatal care and can be used to estimate gestational age (GA).
We developed three AI models: an image model using standard plane images, a video model using fly-to videos, and an ensemble model (combining both image and video).
All three were statistically superior to standard fetal biometry-based GA estimates derived by expert sonographers.
arXiv Detail & Related papers (2022-03-22T17:15:56Z)
- Deep Learning for Ultrasound Beamforming [120.12255978513912]
Beamforming, the process of mapping received ultrasound echoes to the spatial image domain, lies at the heart of the ultrasound image formation chain.
Modern ultrasound imaging leans heavily on innovations in powerful digital receive channel processing.
Deep learning methods can play a compelling role in the digital beamforming pipeline.
arXiv Detail & Related papers (2021-09-23T15:15:21Z)
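Several entries above report Top-k accuracy and F1-score (e.g. the fetal plane classification paper's Top-1 of 96.25%, Top-2 of 99.80%, and F1 of 0.9576). As a minimal, self-contained sketch of how such metrics are computed (function names and data below are illustrative, not taken from any listed paper):

```python
# Hypothetical sketch of two metrics reported by the related papers above,
# implemented from their definitions on toy data.

def top_k_accuracy(y_true, score_lists, k):
    """Fraction of samples whose true class is among the k highest-scored classes."""
    hits = 0
    for true, scores in zip(y_true, score_lists):
        top_k = sorted(range(len(scores)), key=lambda c: scores[c], reverse=True)[:k]
        hits += true in top_k
    return hits / len(y_true)

def f1_binary(y_true, y_pred):
    """F1-score: harmonic mean of precision and recall for the positive class."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy multi-class scores for Top-k; toy binary labels for F1.
labels = [0, 2, 1]
scores = [[0.7, 0.2, 0.1], [0.1, 0.5, 0.4], [0.3, 0.3, 0.4]]
print(round(top_k_accuracy(labels, scores, 1), 4))  # 0.3333
print(round(top_k_accuracy(labels, scores, 2), 4))  # 0.6667
print(round(f1_binary([1, 1, 0, 1, 0], [1, 0, 0, 1, 1]), 4))  # 0.6667
```

Top-2 accuracy is always at least Top-1, which is why papers report both: it shows how often the correct class was the near-miss runner-up.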
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.