Enhancing Surgical Robots with Embodied Intelligence for Autonomous Ultrasound Scanning
- URL: http://arxiv.org/abs/2405.00461v1
- Date: Wed, 1 May 2024 11:39:38 GMT
- Title: Enhancing Surgical Robots with Embodied Intelligence for Autonomous Ultrasound Scanning
- Authors: Huan Xu, Jinlin Wu, Guanglin Cao, Zhen Lei, Zhen Chen, Hongbin Liu
- Abstract summary: Ultrasound robots are increasingly used in medical diagnostics and early disease screening.
Current ultrasound robots lack the intelligence to understand human intentions and instructions.
We propose a novel Ultrasound Embodied Intelligence system that equips ultrasound robots with a large language model and domain knowledge.
- Score: 24.014073238400137
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ultrasound robots are increasingly used in medical diagnostics and early disease screening. However, current ultrasound robots lack the intelligence to understand human intentions and instructions, hindering autonomous ultrasound scanning. To solve this problem, we propose a novel Ultrasound Embodied Intelligence system that equips ultrasound robots with a large language model (LLM) and domain knowledge, thereby improving the efficiency of ultrasound robots. Specifically, we first design an ultrasound operation knowledge database to add expertise in ultrasound scanning to the LLM, enabling it to perform precise motion planning. Furthermore, we devise a dynamic ultrasound scanning strategy based on think-observe-execute prompt engineering, allowing the LLM to dynamically adjust its motion planning during the scanning procedure. Extensive experiments demonstrate that our system significantly improves the efficiency and quality of ultrasound scans driven by verbal commands. This advancement in autonomous medical scanning technology contributes to non-invasive diagnostics and streamlined medical workflows.
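As a rough illustration only (not the authors' implementation), the sketch below shows how a think-observe-execute prompting loop with retrieval over an ultrasound-knowledge database might be wired together; the interfaces (kb_lookup, llm, robot) are hypothetical stand-ins.

```python
# Minimal sketch of a think-observe-execute prompting loop (hypothetical interfaces).
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ScanContext:
    command: str                                  # doctor's transcribed verbal instruction
    history: List[str] = field(default_factory=list)

def build_prompt(ctx: ScanContext, knowledge: List[str], observation: str) -> str:
    """Assemble a structured think/observe/execute prompt for the LLM."""
    return (
        "You control an ultrasound probe.\n"
        "Domain knowledge:\n- " + "\n- ".join(knowledge) + "\n"
        f"Instruction: {ctx.command}\n"
        f"Observation: {observation}\n"
        "Think step by step, then output one action as JSON: "
        '{"move_mm": [dx, dy, dz], "done": false}'
    )

def scan_loop(ctx: ScanContext, kb_lookup: Callable, llm: Callable, robot, max_steps: int = 10):
    """Plan with the LLM, execute on the robot, observe, and re-plan until done."""
    observation = robot.observe()                 # e.g., a textual image-quality summary
    for _ in range(max_steps):
        knowledge = kb_lookup(ctx.command)        # retrieve scanning expertise for this command
        action = llm(build_prompt(ctx, knowledge, observation))  # expected to return a parsed dict
        if action.get("done"):
            break
        robot.move(action["move_mm"])             # execute the planned probe motion
        observation = robot.observe()             # observe the resulting ultrasound view
        ctx.history.append(str(action))
```

In this arrangement the knowledge database only shapes the prompt, while the observe step closes the loop so the LLM can revise its plan as new images arrive.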
Related papers
- Transforming Surgical Interventions with Embodied Intelligence for Ultrasound Robotics [24.014073238400137]
This paper introduces a novel Ultrasound Embodied Intelligence system that combines ultrasound robots with large language models (LLMs) and domain-specific knowledge augmentation.
Our approach employs a dual strategy: firstly, integrating LLMs with ultrasound robots to interpret doctors' verbal instructions into precise motion planning.
Our findings suggest that the proposed system improves the efficiency and quality of ultrasound scans and paves the way for further advancements in autonomous medical scanning technologies.
arXiv Detail & Related papers (2024-06-18T14:22:16Z)
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset consisting of unseen synthetic data and images collected from silicone aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- AiAReSeg: Catheter Detection and Segmentation in Interventional Ultrasound using Transformers [75.20925220246689]
Endovascular surgeries are performed under the gold standard of fluoroscopy, which uses ionising radiation to visualise catheters and vasculature.
This work proposes a solution using an adaptation of a state-of-the-art machine learning transformer architecture to detect and segment catheters in axial interventional Ultrasound image sequences.
arXiv Detail & Related papers (2023-09-25T19:34:12Z)
- Learning Autonomous Ultrasound via Latent Task Representation and Robotic Skills Adaptation [2.3830437836694185]
In this paper, we propose latent task representation and robotic skills adaptation for autonomous ultrasound.
During the offline stage, the multimodal ultrasound skills are merged and encapsulated into a low-dimensional probability model.
During the online stage, the probability model will select and evaluate the optimal prediction.
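The entry above names the two stages but not their form; purely as an illustration (not the paper's actual model), an offline-fit/online-select probability model over low-dimensional multimodal skill vectors could look like the following.

```python
# Toy illustration: fit a Gaussian to low-dimensional multimodal skill vectors offline,
# then use its likelihood online to select the most plausible candidate prediction.
import numpy as np

def fit_skill_model(demos: np.ndarray):
    """Offline stage: estimate mean and (regularised) covariance from demonstrations."""
    mean = demos.mean(axis=0)
    cov = np.cov(demos, rowvar=False) + 1e-6 * np.eye(demos.shape[1])
    return mean, cov

def log_likelihood(x: np.ndarray, mean: np.ndarray, cov: np.ndarray) -> float:
    """Gaussian log-density used to evaluate a candidate prediction."""
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return float(-0.5 * (d @ np.linalg.solve(cov, d) + logdet + len(x) * np.log(2 * np.pi)))

def select_best(candidates: np.ndarray, mean: np.ndarray, cov: np.ndarray) -> int:
    """Online stage: return the index of the highest-likelihood candidate."""
    return int(np.argmax([log_likelihood(c, mean, cov) for c in candidates]))

# Usage with random stand-in data; real inputs would be encoded demonstration frames.
rng = np.random.default_rng(0)
mean, cov = fit_skill_model(rng.normal(size=(200, 6)))   # 200 demo frames, 6-D skill vectors
print(select_best(rng.normal(size=(5, 6)), mean, cov))   # pick among 5 run-time candidates
```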
arXiv Detail & Related papers (2023-07-25T08:32:36Z)
- Towards a Simple Framework of Skill Transfer Learning for Robotic Ultrasound-guidance Procedures [0.0]
We briefly review challenges in skill transfer learning for robotic ultrasound-guidance procedures.
We propose a simple framework of skill transfer learning for real-time applications in robotic ultrasound-guidance procedures.
arXiv Detail & Related papers (2023-05-06T10:37:13Z)
- Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our framework consists of an instrument pose estimation method, an online registration between the robotic system and the iOCT system, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z)
- Localizing Scan Targets from Human Pose for Autonomous Lung Ultrasound Imaging [61.60067283680348]
With the advent of the COVID-19 global pandemic, there is a need to fully automate ultrasound imaging.
We propose a vision-based, data-driven method that incorporates learning-based computer vision techniques.
Our method attains an accuracy of 15.52 (9.47) mm for probe positioning and 4.32 (3.69) degrees for probe orientation, with a success rate above 80% under an error threshold of 25 mm for all scan targets.
arXiv Detail & Related papers (2022-12-15T14:34:12Z)
- Learning Ultrasound Scanning Skills from Human Demonstrations [6.971573270058377]
We propose a learning-based framework to acquire ultrasound scanning skills from human demonstrations.
The parameters of the model are learned using the data collected from skilled sonographers' demonstrations.
The robustness of the proposed framework is validated with experiments on real data from sonographers.
arXiv Detail & Related papers (2021-11-09T12:29:25Z)
- Learning Robotic Ultrasound Scanning Skills via Human Demonstrations and Guided Explorations [12.894853456160924]
We propose a learning-based approach to acquiring robotic ultrasound scanning skills from human demonstrations.
First, the robotic ultrasound scanning skill is encapsulated into a high-dimensional multi-modal model, which takes the ultrasound images, the pose/position of the probe and the contact force into account.
Second, we leverage the power of imitation learning to train the multi-modal model with the training data collected from the demonstrations of experienced ultrasound physicians.
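As a rough sketch only (not the authors' architecture), a multi-modal policy of this kind, fusing the ultrasound image with probe pose and contact force and trained by behaviour cloning, might be set up as follows in PyTorch.

```python
# Hypothetical multi-modal scanning policy: image + probe pose + contact force -> next probe motion.
import torch
import torch.nn as nn

class ScanPolicy(nn.Module):
    def __init__(self, pose_dim: int = 7, force_dim: int = 6, action_dim: int = 6):
        super().__init__()
        self.image_encoder = nn.Sequential(            # tiny CNN over the B-mode image
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(                     # fuse image features with pose and force
            nn.Linear(32 + pose_dim + force_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim),
        )

    def forward(self, image, pose, force):
        return self.head(torch.cat([self.image_encoder(image), pose, force], dim=-1))

# One behaviour-cloning step on a stand-in demonstration batch.
policy = ScanPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
image = torch.randn(8, 1, 128, 128)                    # placeholder ultrasound frames
pose, force = torch.randn(8, 7), torch.randn(8, 6)
expert_action = torch.randn(8, 6)                      # recorded expert probe motion
loss = nn.functional.mse_loss(policy(image, pose, force), expert_action)
opt.zero_grad(); loss.backward(); opt.step()
```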
arXiv Detail & Related papers (2021-11-02T14:38:09Z)
- Voice-assisted Image Labelling for Endoscopic Ultrasound Classification using Neural Networks [48.732863591145964]
We propose a multi-modal convolutional neural network architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure.
Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels.
arXiv Detail & Related papers (2021-10-12T21:22:24Z)
- Deep Learning for Ultrasound Beamforming [120.12255978513912]
Beamforming, the process of mapping received ultrasound echoes to the spatial image domain, lies at the heart of the ultrasound image formation chain.
Modern ultrasound imaging leans heavily on innovations in powerful digital receive channel processing.
Deep learning methods can play a compelling role in the digital beamforming pipeline.
arXiv Detail & Related papers (2021-09-23T15:15:21Z)
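For the beamforming entry above: as a concrete reminder of what "mapping received ultrasound echoes to the spatial image domain" means in the conventional pipeline, here is a toy delay-and-sum beamformer in plain NumPy (idealised geometry, no apodisation or interpolation); learned beamformers replace or augment parts of this chain.

```python
# Toy delay-and-sum (DAS) beamformer: the conventional baseline in the image-formation chain.
import numpy as np

def das_beamform(rf, element_x, pixels_x, pixels_z, c=1540.0, fs=40e6):
    """
    rf:         (n_elements, n_samples) received RF echoes
    element_x:  (n_elements,) lateral element positions [m]
    pixels_x/z: lateral/axial grids of the output image [m]
    Returns an image of shape (len(pixels_z), len(pixels_x)).
    """
    n_elem, n_samp = rf.shape
    image = np.zeros((len(pixels_z), len(pixels_x)))
    for iz, z in enumerate(pixels_z):
        for ix, x in enumerate(pixels_x):
            # two-way travel time: down to depth z, back to each element
            t = (z + np.sqrt(z ** 2 + (x - element_x) ** 2)) / c
            idx = np.clip(np.round(t * fs).astype(int), 0, n_samp - 1)
            image[iz, ix] = rf[np.arange(n_elem), idx].sum()   # coherent (delayed) sum
    return image

# Usage with random stand-in data; real RF data comes from the probe front end.
rf = np.random.randn(64, 2048)
elem_x = np.linspace(-9.6e-3, 9.6e-3, 64)
img = das_beamform(rf, elem_x, np.linspace(-5e-3, 5e-3, 32), np.linspace(5e-3, 25e-3, 64))
print(img.shape)  # (64, 32)
```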