Integrating Artificial Intelligence and Augmented Reality in Robotic
Surgery: An Initial dVRK Study Using a Surgical Education Scenario
- URL: http://arxiv.org/abs/2201.00383v1
- Date: Sun, 2 Jan 2022 17:34:10 GMT
- Title: Integrating Artificial Intelligence and Augmented Reality in Robotic
Surgery: An Initial dVRK Study Using a Surgical Education Scenario
- Authors: Yonghao Long, Jianfeng Cao, Anton Deguet, Russell H. Taylor, and Qi
Dou
- Abstract summary: We develop a novel robotic surgery education system by integrating an artificial intelligence surgical module and augmented reality visualization.
The proposed system is evaluated through a preliminary experiment on the peg-transfer surgical education task.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The demand for competent robot-assisted surgeons is growing as
robot-assisted surgery becomes increasingly popular due to its clinical
advantages. To meet this demand and provide better surgical education for
surgeons, we develop a novel robotic surgery education system that integrates
an artificial intelligence surgical module with augmented reality
visualization. The artificial intelligence module uses reinforcement learning
to learn from expert demonstrations and then generates a 3D guidance
trajectory, providing surgical context awareness of the complete surgical
procedure. The trajectory is visualized in the stereo viewer of the dVRK
together with other information such as text hints, so that the user can
perceive the 3D guidance and learn the procedure. The proposed system is
evaluated in a preliminary experiment on the peg-transfer surgical education
task, which demonstrates its feasibility and its potential as a
next-generation robot-assisted surgery education solution.
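The paper itself does not include code. As a rough, simplified illustration of turning expert demonstrations into a single 3D guidance trajectory, the sketch below time-normalizes several demonstrated waypoint sequences and averages them; the actual system instead learns the trajectory with reinforcement learning. All function and variable names here are hypothetical.

```python
# Simplified stand-in for demonstration-derived 3D guidance:
# time-normalize each expert demonstration, then average waypoints.
# (The paper's AI module learns the trajectory via reinforcement
# learning; plain averaging is used here only for illustration.)

def resample(traj, n):
    """Linearly resample a list of (x, y, z) waypoints to n points."""
    m = len(traj)
    out = []
    for i in range(n):
        t = i * (m - 1) / (n - 1)   # fractional index into traj
        lo = int(t)
        hi = min(lo + 1, m - 1)
        a = t - lo                  # interpolation weight
        out.append(tuple(
            (1 - a) * traj[lo][k] + a * traj[hi][k] for k in range(3)
        ))
    return out

def mean_trajectory(demos, n=50):
    """Average several demonstrations into one reference trajectory."""
    resampled = [resample(d, n) for d in demos]
    return [
        tuple(sum(d[i][k] for d in resampled) / len(resampled)
              for k in range(3))
        for i in range(n)
    ]
```

The resulting waypoint list is the kind of data that could then be rendered as a 3D overlay in a stereo viewer alongside text hints.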
Related papers
- Creating a Digital Twin of Spinal Surgery: A Proof of Concept [68.37190859183663]
Surgery digitalization is the process of creating a virtual replica of real-world surgery.
We present a proof of concept (PoC) for surgery digitalization that is applied to an ex-vivo spinal surgery.
We employ five RGB-D cameras for dynamic 3D reconstruction of the surgeon, a high-end camera for 3D reconstruction of the anatomy, an infrared stereo camera for surgical instrument tracking, and a laser scanner for 3D reconstruction of the operating room and data fusion.
arXiv Detail & Related papers (2024-03-25T13:09:40Z)
- Enhancing Surgical Performance in Cardiothoracic Surgery with Innovations from Computer Vision and Artificial Intelligence: A Narrative Review [12.241487673677517]
This narrative review synthesises work on technical and non-technical surgical skills, task performance, and pose estimation.
It illustrates new opportunities to advance cardiothoracic surgical performance with innovations from computer vision and artificial intelligence.
arXiv Detail & Related papers (2024-02-17T14:16:25Z)
- Deep Multimodal Fusion for Surgical Feedback Classification [70.53297887843802]
We leverage a clinically-validated five-category classification of surgical feedback.
We then develop a multi-label machine learning model to classify these five categories of surgical feedback from inputs of text, audio, and video modalities.
The ultimate goal of our work is to help automate the annotation of real-time contextual surgical feedback at scale.
arXiv Detail & Related papers (2023-12-06T01:59:47Z)
- Toward a Surgeon-in-the-Loop Ophthalmic Robotic Apprentice using Reinforcement and Imitation Learning [18.72371138886818]
We propose an image-guided approach for surgeon-centered autonomous agents during ophthalmic cataract surgery.
By integrating the surgeon's actions and preferences into the training process, our approach enables the robot to implicitly learn and adapt to the individual surgeon's unique techniques.
arXiv Detail & Related papers (2023-11-29T15:00:06Z)
- SAMSNeRF: Segment Anything Model (SAM) Guides Dynamic Surgical Scene Reconstruction by Neural Radiance Field (NeRF) [4.740415113160021]
We propose a novel approach called SAMSNeRF that combines Segment Anything Model (SAM) and Neural Radiance Field (NeRF) techniques.
Our experimental results on public endoscopy surgical videos demonstrate that our approach successfully reconstructs high-fidelity dynamic surgical scenes.
arXiv Detail & Related papers (2023-08-22T20:31:00Z)
- Surgical tool classification and localization: results and methods from the MICCAI 2022 SurgToolLoc challenge [69.91670788430162]
We present the results of the SurgLoc 2022 challenge.
The goal was to leverage tool presence data as weak labels for machine learning models trained to detect tools.
We conclude by discussing these results in the broader context of machine learning and surgical data science.
arXiv Detail & Related papers (2023-05-11T21:44:39Z)
- Demonstration-Guided Reinforcement Learning with Efficient Exploration for Task Automation of Surgical Robot [54.80144694888735]
We introduce Demonstration-guided EXploration (DEX), an efficient reinforcement learning algorithm.
Our method estimates expert-like behaviors with higher values to facilitate productive interactions.
Experiments on 10 surgical manipulation tasks from SurRoL, a comprehensive surgical simulation platform, demonstrate significant improvements.
arXiv Detail & Related papers (2023-02-20T05:38:54Z)
- Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z)
- CholecTriplet2021: A benchmark challenge for surgical action triplet recognition [66.51610049869393]
This paper presents CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos.
We present the challenge setup and assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge.
A total of 4 baseline methods and 19 new deep learning algorithms are presented to recognize surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%.
arXiv Detail & Related papers (2022-04-10T18:51:55Z)
- Robotic Surgery Remote Mentoring via AR with 3D Scene Streaming and Hand Interaction [14.64569748299962]
We propose a novel AR-based robotic surgery remote mentoring system with efficient 3D scene visualization and natural 3D hand interaction.
Using a head-mounted display (i.e., HoloLens), the mentor can remotely monitor the procedure streamed from the trainee's operation side.
We validate the system on both real surgery stereo videos and ex-vivo scenarios of common robotic training tasks.
arXiv Detail & Related papers (2022-04-09T03:17:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.