Automatic Probe Movement Guidance for Freehand Obstetric Ultrasound
- URL: http://arxiv.org/abs/2007.04480v1
- Date: Wed, 8 Jul 2020 23:58:41 GMT
- Title: Automatic Probe Movement Guidance for Freehand Obstetric Ultrasound
- Authors: Richard Droste, Lior Drukker, Aris T. Papageorghiou, J. Alison Noble
- Abstract summary: The system employs an artificial neural network that receives the ultrasound video signal and the motion signal of an inertial measurement unit (IMU) that is attached to the probe.
The network, termed US-GuideNet, predicts either the movement towards the standard plane position (goal prediction) or the next movement that an expert sonographer would perform (action prediction).
- Score: 4.893896929103368
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present the first system that provides real-time probe movement guidance
for acquiring standard planes in routine freehand obstetric ultrasound
scanning. Such a system can contribute to the worldwide deployment of obstetric
ultrasound scanning by lowering the required level of operator expertise. The
system employs an artificial neural network that receives the ultrasound video
signal and the motion signal of an inertial measurement unit (IMU) that is
attached to the probe, and predicts a guidance signal. The network, termed
US-GuideNet, predicts either the movement towards the standard plane position
(goal prediction) or the next movement that an expert sonographer would
perform (action prediction). While existing models for other ultrasound
applications are trained with simulations or phantoms, we train our model with
real-world ultrasound video and probe motion data from 464 routine clinical
scans by 17 accredited sonographers. Evaluations for 3 standard plane types
show that the model provides a useful guidance signal with an accuracy of 88.8%
for goal prediction and 90.9% for action prediction.
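The abstract specifies the model's inputs (ultrasound video and IMU motion) and outputs (goal or action guidance) but not its internal architecture. The following is a minimal PyTorch sketch of a generic video+IMU guidance model in that spirit; the CNN encoder, GRU, quaternion-style output head, and all dimensions are illustrative assumptions, not details of US-GuideNet.

```python
import torch
import torch.nn as nn

class GuidanceNetSketch(nn.Module):
    """Illustrative video+IMU guidance model (NOT the authors' US-GuideNet).

    Per frame: encode the ultrasound image, concatenate IMU features,
    aggregate over time with a GRU, and regress a guidance signal, e.g.
    a rotation towards the standard plane (goal) or the expert's next
    move (action).
    """

    def __init__(self, imu_dim: int = 6, guidance_dim: int = 4):
        super().__init__()
        # Small CNN encoder for greyscale ultrasound frames (assumed).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.rnn = nn.GRU(32 + imu_dim, 64, batch_first=True)
        # e.g. a unit quaternion describing the predicted probe rotation
        self.head = nn.Linear(64, guidance_dim)

    def forward(self, frames: torch.Tensor, imu: torch.Tensor) -> torch.Tensor:
        # frames: (B, T, 1, H, W); imu: (B, T, imu_dim)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        hidden, _ = self.rnn(torch.cat([feats, imu], dim=-1))
        out = self.head(hidden[:, -1])
        return out / out.norm(dim=-1, keepdim=True)  # normalise quaternion

model = GuidanceNetSketch()
pred = model(torch.randn(2, 8, 1, 64, 64), torch.randn(2, 8, 6))
print(pred.shape)  # torch.Size([2, 4])
```

A recurrent unit over per-frame features is one common way to exploit the temporal continuity of a freehand sweep; the real system may differ substantially.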
Related papers
- Enhancing Surgical Robots with Embodied Intelligence for Autonomous Ultrasound Scanning [24.014073238400137]
Ultrasound robots are increasingly used in medical diagnostics and early disease screening.
Current ultrasound robots lack the intelligence to understand human intentions and instructions.
We propose a novel Ultrasound Embodied Intelligence system that equips ultrasound robots with large language models and domain knowledge.
arXiv Detail & Related papers (2024-05-01T11:39:38Z)
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset consisting of unseen synthetic data and images collected from silicone aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- Training-free image style alignment for self-adapting domain shift on handheld ultrasound devices [54.476120039032594]
We propose the Training-free Image Style Alignment (TISA) framework to align the style of handheld-device data with that of standard devices.
TISA can run inference directly on handheld-device images without extra training and is suited for clinical applications.
arXiv Detail & Related papers (2024-02-17T07:15:23Z)
- AiAReSeg: Catheter Detection and Segmentation in Interventional Ultrasound using Transformers [75.20925220246689]
Endovascular surgeries are performed under the gold standard of fluoroscopy, which uses ionising radiation to visualise catheters and vasculature.
This work proposes a solution using an adaptation of a state-of-the-art machine learning transformer architecture to detect and segment catheters in axial interventional Ultrasound image sequences.
arXiv Detail & Related papers (2023-09-25T19:34:12Z)
- Identifying Visible Tissue in Intraoperative Ultrasound Images during Brain Surgery: A Method and Application [1.4408275800058263]
Intraoperative ultrasound scanning is a demanding visuotactile task.
It requires operators to simultaneously localise the ultrasound perspective and manually perform slight adjustments to the pose of the probe.
We propose a method for identifying the visible tissue, which enables analysis of contact between the ultrasound probe and the tissue.
arXiv Detail & Related papers (2023-06-01T23:06:14Z)
- Localizing Scan Targets from Human Pose for Autonomous Lung Ultrasound Imaging [61.60067283680348]
With the advent of the COVID-19 global pandemic, there is a need to fully automate ultrasound imaging.
We propose a vision-based, data-driven method that incorporates learning-based computer vision techniques.
Our method attains an accuracy of 15.52 (9.47) mm for probe positioning and 4.32 (3.69)° for probe orientation, with a success rate above 80% under an error threshold of 25 mm for all scan targets.
arXiv Detail & Related papers (2022-12-15T14:34:12Z)
- Learning Ultrasound Scanning Skills from Human Demonstrations [6.971573270058377]
We propose a learning-based framework to acquire ultrasound scanning skills from human demonstrations.
The parameters of the model are learned from data collected during skilled sonographers' demonstrations.
The robustness of the proposed framework is validated with experiments on real data from sonographers.
arXiv Detail & Related papers (2021-11-09T12:29:25Z)
- Learning Robotic Ultrasound Scanning Skills via Human Demonstrations and Guided Explorations [12.894853456160924]
We propose a learning-based approach to learn the robotic ultrasound scanning skills from human demonstrations.
First, the robotic ultrasound scanning skill is encapsulated into a high-dimensional multi-modal model, which takes the ultrasound images, the pose/position of the probe and the contact force into account.
Second, we leverage the power of imitation learning to train the multi-modal model with training data collected from the demonstrations of experienced ultrasound physicians; a toy sketch of such a multi-modal imitation setup appears after this list.
arXiv Detail & Related papers (2021-11-02T14:38:09Z)
- Deep Learning for Ultrasound Beamforming [120.12255978513912]
Beamforming, the process of mapping received ultrasound echoes to the spatial image domain, lies at the heart of the ultrasound image formation chain.
Modern ultrasound imaging leans heavily on innovations in powerful digital receive channel processing.
Deep learning methods can play a compelling role in the digital beamforming pipeline; a classical delay-and-sum reference sketch follows this list.
arXiv Detail & Related papers (2021-09-23T15:15:21Z)
- Assisted Probe Positioning for Ultrasound Guided Radiotherapy Using Image Sequence Classification [55.96221340756895]
Effective transperineal ultrasound image guidance in prostate external beam radiotherapy requires consistent alignment between probe and prostate at each session during patient set-up.
We demonstrate a method for ensuring accurate probe placement through joint classification of images and probe position data.
Using a multi-input multi-task algorithm, spatial coordinate data from an optically tracked ultrasound probe is combined with an image classifier using a recurrent neural network to generate two sets of predictions in real-time.
The algorithm identified optimal probe alignment within a mean (standard deviation) range of 3.7° (1.2°) from
arXiv Detail & Related papers (2020-10-06T13:55:02Z)
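As noted in the "Learning Robotic Ultrasound Scanning Skills via Human Demonstrations and Guided Explorations" entry above, here is a toy behavioural-cloning sketch of a multi-modal model that fuses image, probe pose, and contact force; the input shapes and the 6-D velocity command are assumptions for illustration, not the authors' design.

```python
import torch
import torch.nn as nn

class ScanningSkillSketch(nn.Module):
    """Toy multi-modal imitation model (NOT the authors' architecture):
    fuse image, probe pose, and contact force to predict the expert's
    next probe velocity command."""

    def __init__(self):
        super().__init__()
        self.img_enc = nn.Sequential(
            nn.Conv2d(1, 8, 5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> 8 features
        )
        self.pose_enc = nn.Linear(7, 16)   # position (3) + quaternion (4)
        self.force_enc = nn.Linear(3, 8)   # contact force vector
        self.policy = nn.Sequential(
            nn.Linear(8 + 16 + 8, 64), nn.ReLU(),
            nn.Linear(64, 6),              # linear + angular velocity
        )

    def forward(self, image, pose, force):
        z = torch.cat([self.img_enc(image),
                       self.pose_enc(pose),
                       self.force_enc(force)], dim=-1)
        return self.policy(z)

# One behavioural-cloning step on a synthetic batch (illustrative only).
model = ScanningSkillSketch()
img, pose, force = torch.randn(4, 1, 64, 64), torch.randn(4, 7), torch.randn(4, 3)
expert_cmd = torch.randn(4, 6)
loss = nn.functional.mse_loss(model(img, pose, force), expert_cmd)
loss.backward()
```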
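As noted in the "Deep Learning for Ultrasound Beamforming" entry, the learned pipelines discussed there augment or replace classical delay-and-sum beamforming. For reference, here is a minimal NumPy implementation of delay-and-sum for a linear array under an idealised plane-wave transmit; the geometry and all parameters are illustrative.

```python
import numpy as np

def delay_and_sum(rf, x_elem, x_pix, z_pix, fs, c=1540.0):
    """Classical delay-and-sum beamforming for a linear array (sketch).

    rf:     (n_elements, n_samples) received RF channel data
    x_elem: (n_elements,) lateral element positions [m]
    x_pix, z_pix: lateral/axial pixel grids [m]
    fs:     sampling rate [Hz]; c: speed of sound [m/s]
    Assumes a plane-wave transmit at t=0, so the transmit delay is z/c.
    """
    n_samp = rf.shape[1]
    image = np.zeros((len(z_pix), len(x_pix)))
    for iz, z in enumerate(z_pix):
        for ix, x in enumerate(x_pix):
            # two-way time of flight: transmit (z/c) + receive (per element)
            rx = np.sqrt((x_elem - x) ** 2 + z ** 2) / c
            idx = np.round((z / c + rx) * fs).astype(int)
            valid = idx < n_samp
            image[iz, ix] = rf[valid, idx[valid]].sum()  # coherent sum
    return image

# Toy usage with synthetic channel data (illustrative only).
rf = np.random.randn(64, 2048)
x_elem = np.linspace(-0.01, 0.01, 64)
img = delay_and_sum(rf, x_elem,
                    x_pix=np.linspace(-0.01, 0.01, 32),
                    z_pix=np.linspace(0.005, 0.04, 32),
                    fs=40e6)
print(img.shape)  # (32, 32)
```

Learned beamformers typically intervene in this pipeline, for example by replacing the fixed coherent sum with learned apodisation weights or by operating directly on the delayed channel data.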
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.