Autonomous Soft Tissue Retraction Using Demonstration-Guided
Reinforcement Learning
- URL: http://arxiv.org/abs/2309.00837v1
- Date: Sat, 2 Sep 2023 06:13:58 GMT
- Authors: Amritpal Singh, Wenqi Shi, May D Wang
- Abstract summary: Existing surgical task learning mainly pertains to rigid body interactions.
The advancement towards more sophisticated surgical robots necessitates the manipulation of soft bodies.
This work lays the foundation for future research into the development and refinement of surgical robots capable of managing both rigid and soft tissue interactions.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the context of surgery, robots can provide substantial assistance by
performing small, repetitive tasks such as suturing, needle exchange, and
tissue retraction, thereby enabling surgeons to concentrate on more complex
aspects of the procedure. However, existing surgical task learning mainly
pertains to rigid body interactions, whereas the advancement towards more
sophisticated surgical robots necessitates the manipulation of soft bodies.
Previous work focused on tissue phantoms for soft tissue task learning, which
can be expensive and pose an entry barrier to research. Simulation
environments present a safe and efficient way to learn surgical tasks before
their application to actual tissue. In this study, we create a Robot Operating
System (ROS)-compatible physics simulation environment with support for both
rigid and soft body interactions within surgical tasks. Furthermore, we
investigate the soft tissue interactions facilitated by the patient-side
manipulator of the da Vinci surgical robot. Leveraging the pybullet physics
engine, we simulate kinematics and establish anchor points to guide the robotic
arm when manipulating soft tissue. Using demonstration-guided reinforcement
learning (RL) algorithms, we investigate their performance in comparison to
traditional reinforcement learning algorithms. Our in silico trials demonstrate
a proof-of-concept for autonomous surgical soft tissue retraction. The results
corroborate the feasibility of learning soft body manipulation through the
application of reinforcement learning agents. This work lays the foundation for
future research into the development and refinement of surgical robots capable
of managing both rigid and soft tissue interactions. Code is available at
https://github.com/amritpal-001/tissue_retract.
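The abstract does not spell out how demonstrations enter the learning loop. A common demonstration-guided RL recipe (as in DDPG-from-Demonstrations-style methods) keeps expert transitions permanently in the replay buffer and mixes a fixed fraction of them into every training minibatch. The sketch below is a minimal, hypothetical illustration of such a buffer, not the paper's actual implementation; all names (`DemoGuidedReplayBuffer`, `demo_ratio`) are assumptions.

```python
import random

class DemoGuidedReplayBuffer:
    """Mixes expert demonstration transitions with the agent's own
    experience, a common ingredient of demonstration-guided RL."""

    def __init__(self, demo_transitions, demo_ratio=0.25, capacity=10_000, seed=0):
        self.demos = list(demo_transitions)  # kept permanently, never evicted
        self.agent = []                      # agent experience, FIFO-evicted
        self.demo_ratio = demo_ratio
        self.capacity = capacity
        self.rng = random.Random(seed)

    def add(self, transition):
        """Store one (state, action, reward, next_state) agent transition."""
        self.agent.append(transition)
        if len(self.agent) > self.capacity:
            self.agent.pop(0)

    def sample(self, batch_size):
        """Draw a minibatch with roughly demo_ratio demonstration samples."""
        n_demo = min(int(batch_size * self.demo_ratio), len(self.demos))
        n_agent = min(batch_size - n_demo, len(self.agent))
        batch = (self.rng.sample(self.demos, n_demo)
                 + self.rng.sample(self.agent, n_agent))
        self.rng.shuffle(batch)
        return batch

# Usage: seed the buffer with demonstrations, then add agent experience.
# Demo transitions carry reward 1.0 here purely so they are identifiable.
demos = [("s_demo", "a_demo", 1.0, "s_demo_next")] * 40
buf = DemoGuidedReplayBuffer(demos, demo_ratio=0.25)
for t in range(200):
    buf.add((f"s{t}", f"a{t}", 0.0, f"s{t + 1}"))
batch = buf.sample(32)
print(sum(1 for tr in batch if tr[2] == 1.0))  # → 8 demo transitions per batch
```

Keeping the demonstrations in a separate, never-evicted pool is the design choice that lets early training bootstrap from expert behavior while later training is dominated by on-policy experience.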
Related papers
- DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative
Diffusion Models [102.13968267347553]
We present DiffuseBot, a physics-augmented diffusion model that generates soft robot morphologies capable of excelling in a wide spectrum of tasks.
We showcase a range of simulated and fabricated robots along with their capabilities.
arXiv Detail & Related papers (2023-11-28T18:58:48Z)
- Surgical Gym: A high-performance GPU-based platform for reinforcement learning with surgical robots [1.6415802723978308]
We introduce Surgical Gym, an open-source, high-performance platform for surgical robot learning.
We demonstrate 100-5000x faster training times than previous surgical learning platforms.
arXiv Detail & Related papers (2023-10-07T03:21:58Z)
- Surgical tool classification and localization: results and methods from the MICCAI 2022 SurgToolLoc challenge [69.91670788430162]
We present the results of the SurgLoc 2022 challenge.
The goal was to leverage tool presence data as weak labels for machine learning models trained to detect tools.
We conclude by discussing these results in the broader context of machine learning and surgical data science.
arXiv Detail & Related papers (2023-05-11T21:44:39Z)
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of an instrument pose estimation method, an online registration between the robotic and iOCT systems, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z)
- SurRoL: An Open-source Reinforcement Learning Centered and dVRK Compatible Platform for Surgical Robot Learning [78.76052604441519]
SurRoL is an RL-centered simulation platform for surgical robot learning, compatible with the da Vinci Research Kit (dVRK).
Ten learning-based surgical tasks, common in real autonomous surgical execution, are built into the platform.
We evaluate SurRoL using RL algorithms in simulation, provide in-depth analysis, deploy the trained policies on the real dVRK, and show that our SurRoL achieves better transferability in the real world.
arXiv Detail & Related papers (2021-08-30T07:43:47Z)
- Learning needle insertion from sample task executions [0.0]
Robotic surgery data can be easily logged, and the collected data can be used to learn task models.
We present a needle insertion dataset of 60 successful trials recorded by 3 pairs of stereo cameras.
We also present Deep-robot Learning from Demonstrations, which predicts the desired robot state at the next time step.
arXiv Detail & Related papers (2021-03-14T14:23:17Z)
- Synthetic and Real Inputs for Tool Segmentation in Robotic Surgery [10.562627972607892]
We show that it may be possible to use robot kinematic data coupled with laparoscopic images to alleviate the labelling problem.
We propose a new deep learning based model for parallel processing of both laparoscopic and simulation images.
arXiv Detail & Related papers (2020-07-17T16:33:33Z)
- Recurrent and Spiking Modeling of Sparse Surgical Kinematics [0.8458020117487898]
A growing number of studies have used machine learning to analyze video and kinematic data captured from surgical robots.
In this study, we explore the possibility of using only kinematic data to predict surgeons of similar skill levels.
We report that it is possible to identify surgical fellows receiving near-perfect scores in the simulation exercises based on their motion characteristics alone.
arXiv Detail & Related papers (2020-05-12T15:41:45Z)
- SuPer Deep: A Surgical Perception Framework for Robotic Tissue Manipulation using Deep Learning for Feature Extraction [25.865648975312407]
We exploit deep learning methods for surgical perception.
We integrated deep neural networks, capable of efficient feature extraction, into the tissue reconstruction and instrument pose estimation processes.
Our framework achieves state-of-the-art tracking performance in a surgical environment by utilizing deep learning for feature extraction.
arXiv Detail & Related papers (2020-03-07T00:08:30Z)
- Automatic Gesture Recognition in Robot-assisted Surgery with Reinforcement Learning and Tree Search [63.07088785532908]
We propose a framework based on reinforcement learning and tree search for joint surgical gesture segmentation and classification.
Our framework consistently outperforms the existing methods on the suturing task of JIGSAWS dataset in terms of accuracy, edit score and F1 score.
arXiv Detail & Related papers (2020-02-20T13:12:38Z)