Image-Guided Navigation of a Robotic Ultrasound Probe for Autonomous
Spinal Sonography Using a Shadow-aware Dual-Agent Framework
- URL: http://arxiv.org/abs/2111.02167v1
- Date: Wed, 3 Nov 2021 12:11:27 GMT
- Authors: Keyu Li, Yangxin Xu, Jian Wang, Dong Ni, Li Liu, Max Q.-H. Meng
- Abstract summary: We propose a novel dual-agent framework that integrates a reinforcement learning agent and a deep learning agent.
Our method can effectively interpret the US images and navigate the probe to acquire multiple standard views of the spine.
- Score: 35.17207004351791
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ultrasound (US) imaging is commonly used to assist in the diagnosis
and interventions of spine diseases, but standardized US acquisitions performed
by manually operating the probe require substantial experience and training of
sonographers. In this work, we propose a novel dual-agent framework
that integrates a reinforcement learning (RL) agent and a deep learning (DL)
agent to jointly determine the movement of the US probe based on the real-time
US images, in order to mimic the decision-making process of an expert
sonographer to achieve autonomous standard view acquisitions in spinal
sonography. Moreover, inspired by the nature of US propagation and the
characteristics of the spinal anatomy, we introduce a view-specific acoustic
shadow reward to utilize the shadow information to implicitly guide the
navigation of the probe toward different standard views of the spine. Our
method is validated in both quantitative and qualitative experiments in a
simulation environment built with US data acquired from $17$ volunteers. The
average navigation accuracy toward different standard views achieves
$5.18mm/5.25^\circ$ and $12.87mm/17.49^\circ$ in the intra- and inter-subject
settings, respectively. The results demonstrate that our method can effectively
interpret the US images and navigate the probe to acquire multiple standard
views of the spine.
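The abstract describes a view-specific acoustic shadow reward that guides the probe toward each standard view, combined with a navigation objective. The paper does not spell out the reward's exact form here, so the following is a minimal illustrative sketch under stated assumptions: the shadow reward is modeled as the Jaccard (IoU) similarity between the current frame's shadow map and a per-view shadow template, added to a dense distance-progress term. The function names, the IoU measure, and the weighting are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def shadow_reward(shadow_map, template, weight=0.5):
    """Hypothetical view-specific shadow term: rewards probe poses whose
    acoustic-shadow pattern matches the template expected for the target
    standard view.

    shadow_map: 2D binary array marking shadowed pixels in the current frame.
    template:   2D binary array of the expected shadow pattern for the view.
    """
    inter = np.logical_and(shadow_map, template).sum()
    union = np.logical_or(shadow_map, template).sum()
    iou = inter / union if union > 0 else 0.0  # Jaccard similarity in [0, 1]
    return weight * iou

def step_reward(dist_prev, dist_curr, shadow_map, template):
    """Per-step reward: positive when the probe moves closer to the target
    pose, plus the shadow-matching bonus for the target view."""
    return (dist_prev - dist_curr) + shadow_reward(shadow_map, template)
```

In this sketch, a perfectly matching shadow pattern adds at most `weight` per step, so the shadow term implicitly shapes the trajectory without overwhelming the distance-progress signal; the actual balance used in the paper may differ.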
Related papers
- AiAReSeg: Catheter Detection and Segmentation in Interventional
Ultrasound using Transformers [75.20925220246689]
Endovascular surgeries are performed using the gold standard of fluoroscopy, which uses ionising radiation to visualise catheters and vasculature.
This work proposes a solution using an adaptation of a state-of-the-art machine learning transformer architecture to detect and segment catheters in axial interventional Ultrasound image sequences.
arXiv Detail & Related papers (2023-09-25T19:34:12Z)
- Intelligent Robotic Sonographer: Mutual Information-based Disentangled
Reward Learning from Few Demonstrations [42.731081399649916]
This work proposes an intelligent robotic sonographer to autonomously "explore" target anatomies and navigate a US probe to a relevant 2D plane by learning from the expert.
The underlying high-level physiological knowledge from experts is inferred by a neural reward function.
The proposed advanced framework can robustly work on a variety of seen and unseen phantoms as well as in-vivo human carotid data.
arXiv Detail & Related papers (2023-07-07T16:30:50Z)
- Towards Autonomous Atlas-based Ultrasound Acquisitions in Presence of
Articulated Motion [48.52403516006036]
This paper proposes a vision-based approach allowing autonomous robotic US limb scanning.
To this end, an atlas MRI template of a human arm with annotated vascular structures is used to generate trajectories.
In all cases, the system can successfully acquire the planned vascular structure on volunteers' limbs.
arXiv Detail & Related papers (2022-08-10T15:39:20Z)
- OADAT: Experimental and Synthetic Clinical Optoacoustic Data for
Standardized Image Processing [62.993663757843464]
Optoacoustic (OA) imaging is based on excitation of biological tissues with nanosecond-duration laser pulses followed by detection of ultrasound waves generated via light-absorption-mediated thermoelastic expansion.
OA imaging features a powerful combination between rich optical contrast and high resolution in deep tissues.
No standardized datasets generated with different types of experimental set-ups and associated processing methods are currently available to facilitate advances in broader clinical applications of OA.
arXiv Detail & Related papers (2022-06-17T08:11:26Z)
- VesNet-RL: Simulation-based Reinforcement Learning for Real-World US
Probe Navigation [39.7566010845081]
In freehand US examinations, sonographers often navigate a US probe to visualize standard examination planes with rich diagnostic information.
We propose a simulation-based RL framework for real-world navigation of US probes towards the standard longitudinal views of vessels.
arXiv Detail & Related papers (2022-05-10T09:34:42Z)
- Voice-assisted Image Labelling for Endoscopic Ultrasound Classification
using Neural Networks [48.732863591145964]
We propose a multi-modal convolutional neural network architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure.
Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels.
arXiv Detail & Related papers (2021-10-12T21:22:24Z)
- Follow the Curve: Robotic-Ultrasound Navigation with Learning Based
Localization of Spinous Processes for Scoliosis Assessment [1.7594269512136405]
This paper introduces a robotic-ultrasound approach for spinal curvature tracking and automatic navigation.
A fully connected network with deconvolutional heads is developed to locate the spinous process efficiently with real-time ultrasound images.
We developed a new force-driven controller that automatically adjusts the probe's pose relative to the skin surface to ensure good acoustic coupling between the probe and the skin.
arXiv Detail & Related papers (2021-09-11T06:25:30Z)
- Semantic segmentation of multispectral photoacoustic images using deep
learning [53.65837038435433]
Photoacoustic imaging has the potential to revolutionise healthcare.
Clinical translation of the technology requires conversion of the high-dimensional acquired data into clinically relevant and interpretable information.
We present a deep learning-based approach to semantic segmentation of multispectral photoacoustic images.
arXiv Detail & Related papers (2021-05-20T09:33:55Z)
- Autonomous Navigation of an Ultrasound Probe Towards Standard Scan
Planes with Deep Reinforcement Learning [28.17246919349759]
We propose a framework to autonomously control the 6-D pose of a virtual US probe based on real-time image feedback.
We validate our method in a simulation environment built with real-world data collected in the US imaging of the spine.
arXiv Detail & Related papers (2021-03-01T03:09:17Z)
- Screen Tracking for Clinical Translation of Live Ultrasound Image
Analysis Methods [2.5805793749729857]
The proposed method captures the US image by tracking the screen with a camera fixed at the sonographer's view point and reformats the captured image to the right aspect ratio.
It is hypothesized that this would enable such a retrieved image to be fed into an image-processing pipeline to extract information that can help improve the examination.
This information could eventually be projected back to the sonographer's field of view in real time using, for example, an augmented reality (AR) headset.
arXiv Detail & Related papers (2020-07-13T09:53:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.