Surgical Gym: A high-performance GPU-based platform for reinforcement
learning with surgical robots
- URL: http://arxiv.org/abs/2310.04676v2
- Date: Sat, 27 Jan 2024 18:07:37 GMT
- Title: Surgical Gym: A high-performance GPU-based platform for reinforcement
learning with surgical robots
- Authors: Samuel Schmidgall, Axel Krieger, Jason Eshraghian
- Abstract summary: We introduce Surgical Gym, an open-source high performance platform for surgical robot learning.
We demonstrate between 100-5000x faster training times compared with previous surgical learning platforms.
- Score: 1.6415802723978308
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in robot-assisted surgery have resulted in progressively more
precise, efficient, and minimally invasive procedures, sparking a new era of
robotic surgical intervention. This enables doctors, in collaborative
interaction with robots, to perform traditional or minimally invasive surgeries
with improved outcomes through smaller incisions. Recent efforts are working
toward making robotic surgery more autonomous which has the potential to reduce
variability of surgical outcomes and reduce complication rates. Deep
reinforcement learning methodologies offer scalable solutions for surgical
automation, but their effectiveness relies on extensive data acquisition due to
the absence of prior knowledge in successfully accomplishing tasks. Due to the
intensive nature of simulated data collection, previous works have focused on
making existing algorithms more efficient. In this work, we focus on making the
simulator more efficient, making training data much more accessible than
previously possible. We introduce Surgical Gym, an open-source high performance
platform for surgical robot learning where both the physics simulation and
reinforcement learning occur directly on the GPU. We demonstrate between
100-5000x faster training times compared with previous surgical learning
platforms. The code is available at:
https://github.com/SamuelSchmidgall/SurgicalGym.
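The speedups reported above come from keeping both the physics simulation and the reinforcement-learning update resident on the GPU, so that thousands of environments step in parallel without copying observations back to the CPU. The sketch below illustrates that data-flow pattern with a toy batched environment and a one-step policy-gradient update in PyTorch; the dynamics, network sizes, and names are hypothetical placeholders, not the actual Surgical Gym physics or API (see the linked repository for the real implementation).

```python
# Minimal sketch of GPU-resident, vectorized RL (toy dynamics and names,
# not the actual Surgical Gym physics or API).
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
num_envs, obs_dim, act_dim = 4096, 8, 2  # thousands of environments in one batch

class ToyBatchedEnv:
    """All environment states live in a single (num_envs, obs_dim) GPU tensor."""
    def __init__(self):
        self.state = torch.zeros(num_envs, obs_dim, device=device)

    def reset(self):
        self.state.normal_(0.0, 0.1)
        return self.state

    def step(self, action):
        # Placeholder point-mass dynamics; a real simulator would run its
        # physics kernels here, still entirely on the GPU.
        pad = torch.zeros(num_envs, obs_dim - act_dim, device=device)
        self.state = self.state + 0.05 * torch.cat([action, pad], dim=1)
        reward = -self.state[:, :act_dim].pow(2).sum(dim=1)  # drive first dims to 0
        return self.state, reward

policy = torch.nn.Sequential(
    torch.nn.Linear(obs_dim, 64), torch.nn.Tanh(), torch.nn.Linear(64, act_dim)
).to(device)
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

env = ToyBatchedEnv()
obs = env.reset()
for it in range(100):
    dist = torch.distributions.Normal(policy(obs), 0.1)
    action = dist.sample()
    obs, reward = env.step(action)
    # One-step policy-gradient update; observations, rewards, and gradients
    # never leave the device, which is where the reported speedups come from.
    loss = -(dist.log_prob(action).sum(dim=1) * reward.detach()).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the actual platform this role is played by a GPU-accelerated physics engine rather than the toy dynamics above, but the idea of keeping simulation, storage, and learning on-device is the same.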
Related papers
- Simulation-Aided Policy Tuning for Black-Box Robot Learning [47.83474891747279]
We present a novel black-box policy search algorithm focused on data-efficient policy improvements.
The algorithm learns directly on the robot and treats simulation as an additional information source to speed up the learning process.
We show fast and successful task learning on a robot manipulator with the aid of an imperfect simulator.
arXiv Detail & Related papers (2024-11-21T15:52:23Z)
- Offline Imitation Learning Through Graph Search and Retrieval [57.57306578140857]
Imitation learning is a powerful machine learning approach for robots to acquire manipulation skills.
We propose GSR, a simple yet effective algorithm that learns from suboptimal demonstrations through Graph Search and Retrieval.
GSR can achieve a 10% to 30% higher success rate and over 30% higher proficiency compared to baselines.
arXiv Detail & Related papers (2024-07-22T06:12:21Z)
- Toward a Surgeon-in-the-Loop Ophthalmic Robotic Apprentice using Reinforcement and Imitation Learning [18.72371138886818]
We propose an image-guided approach for surgeon-centered autonomous agents during ophthalmic cataract surgery.
By integrating the surgeon's actions and preferences into the training process, our approach enables the robot to implicitly learn and adapt to the individual surgeon's unique techniques.
arXiv Detail & Related papers (2023-11-29T15:00:06Z)
- Autonomous Soft Tissue Retraction Using Demonstration-Guided Reinforcement Learning [6.80186731352488]
Existing surgical task learning mainly pertains to rigid body interactions.
The advancement towards more sophisticated surgical robots necessitates the manipulation of soft bodies.
This work lays the foundation for future research into the development and refinement of surgical robots capable of managing both rigid and soft tissue interactions.
arXiv Detail & Related papers (2023-09-02T06:13:58Z)
- Surgical tool classification and localization: results and methods from the MICCAI 2022 SurgToolLoc challenge [69.91670788430162]
We present the results of the SurgToolLoc 2022 challenge.
The goal was to leverage tool presence data as weak labels for machine learning models trained to detect tools.
We conclude by discussing these results in the broader context of machine learning and surgical data science.
arXiv Detail & Related papers (2023-05-11T21:44:39Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning both to do and to undo it, while simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- Demonstration-Guided Reinforcement Learning with Efficient Exploration for Task Automation of Surgical Robot [54.80144694888735]
We introduce Demonstration-guided EXploration (DEX), an efficient reinforcement learning algorithm.
Our method assigns higher value estimates to expert-like behaviors, steering exploration toward productive interactions (a generic sketch of this idea appears after this list).
Experiments on 10 surgical manipulation tasks from SurRoL, a comprehensive surgical simulation platform, demonstrate significant improvements.
arXiv Detail & Related papers (2023-02-20T05:38:54Z)
- Human-in-the-loop Embodied Intelligence with Interactive Simulation Environment for Surgical Robot Learning [19.390115282150337]
We study human-in-the-loop embodied intelligence with a new interactive simulation platform for surgical robot learning.
Specifically, we establish our platform based on our previously released SurRoL simulator with several new features.
We showcase the improvements enabled by the newly designed features and validate the effectiveness of incorporating human factors into embodied intelligence.
arXiv Detail & Related papers (2023-01-01T18:05:25Z)
- Deep Reinforcement Learning Based Semi-Autonomous Control for Robotic Surgery [13.940778824773414]
We propose a deep reinforcement learning-based semi-autonomous control framework for robotic surgery.
The framework can reduce the completion time by 19.1% and the travel length by 58.7%.
arXiv Detail & Related papers (2022-04-11T22:59:33Z)
- SurRoL: An Open-source Reinforcement Learning Centered and dVRK Compatible Platform for Surgical Robot Learning [78.76052604441519]
SurRoL is an RL-centered simulation platform for surgical robot learning compatible with the da Vinci Research Kit (dVRK)
Ten learning-based surgical tasks that are common in real autonomous surgical execution are built into the platform.
We evaluate SurRoL with RL algorithms in simulation, provide in-depth analysis, deploy the trained policies on the real dVRK, and show that SurRoL achieves better transferability to the real world.
arXiv Detail & Related papers (2021-08-30T07:43:47Z)
- Recurrent and Spiking Modeling of Sparse Surgical Kinematics [0.8458020117487898]
A growing number of studies have used machine learning to analyze video and kinematic data captured from surgical robots.
In this study, we explore the possibility of using only kinematic data to distinguish between surgeons of similar skill levels.
We report that it is possible to identify surgical fellows who receive near-perfect scores in the simulation exercises from their motion characteristics alone.
arXiv Detail & Related papers (2020-05-12T15:41:45Z)
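To make the last entry's approach concrete, the sketch below trains a small recurrent classifier to identify which operator produced a kinematic sequence. The synthetic tensors, feature dimension, and label count are invented placeholders; the paper itself applies recurrent and spiking models to real surgical kinematic recordings.

```python
# Minimal sketch of identifying operators from kinematic sequences with a
# recurrent model (synthetic placeholder data, not the paper's dataset).
import torch

seq_len, feat_dim, num_surgeons, batch = 200, 6, 8, 32

class KinematicsRNN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = torch.nn.GRU(feat_dim, 64, batch_first=True)
        self.head = torch.nn.Linear(64, num_surgeons)

    def forward(self, x):        # x: (batch, seq_len, feat_dim)
        _, h = self.rnn(x)       # final hidden state: (1, batch, 64)
        return self.head(h.squeeze(0))

model = KinematicsRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

for step in range(200):
    # Stand-in for recorded tool-tip kinematics and surgeon identity labels.
    x = torch.randn(batch, seq_len, feat_dim)
    y = torch.randint(0, num_surgeons, (batch,))
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Similarly, for the demonstration-guided exploration idea summarized in the DEX entry above, the following generic sketch (not the authors' algorithm) pulls the policy toward demonstration actions while also maximizing a critic's value estimate; all buffers and networks here are hypothetical stand-ins, and a full implementation would also train the critic with temporal-difference learning.

```python
# Generic sketch of demonstration-guided policy learning (a stand-in for the
# general idea, not the DEX algorithm): the actor maximizes a critic's value
# estimate while being pulled toward demonstration actions.
import torch

obs_dim, act_dim, n_demo = 8, 2, 512

actor = torch.nn.Sequential(torch.nn.Linear(obs_dim, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, act_dim))
# Placeholder critic; a full implementation would train it with TD learning.
critic = torch.nn.Sequential(torch.nn.Linear(obs_dim + act_dim, 64), torch.nn.ReLU(),
                             torch.nn.Linear(64, 1))
opt = torch.optim.Adam(actor.parameters(), lr=3e-4)

# Hypothetical demonstration buffer of (state, action) pairs.
demo_s = torch.randn(n_demo, obs_dim)
demo_a = torch.randn(n_demo, act_dim)

for step in range(200):
    idx = torch.randint(0, n_demo, (64,))
    s, a_demo = demo_s[idx], demo_a[idx]
    a_pi = actor(s)
    value_term = critic(torch.cat([s, a_pi], dim=1)).mean()
    bc_term = (a_pi - a_demo).pow(2).mean()  # bias toward expert-like actions
    loss = -value_term + bc_term
    opt.zero_grad()
    loss.backward()
    opt.step()
```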
This list is automatically generated from the titles and abstracts of the papers listed on this site.