Learning needle insertion from sample task executions
- URL: http://arxiv.org/abs/2103.07938v1
- Date: Sun, 14 Mar 2021 14:23:17 GMT
- Title: Learning needle insertion from sample task executions
- Authors: Amir Ghalamzan-E
- Abstract summary: Data from robotic surgery can be easily logged, and the collected data can be used to learn task models.
We present a needle insertion dataset including 60 successful trials recorded by 3 pairs of stereo cameras.
We also present Deep-robot Learning from Demonstrations, which predicts the desired state of the robot at the time step after t.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automating a robotic task, e.g., robotic suturing, can be very complex and time-consuming. Learning a task model to perform the task autonomously is invaluable for making the technology, robotic surgery, accessible to a wider community. Data from robotic surgery can be easily logged, and the collected data can be used to learn task models. This will reduce the time and cost of robotic surgery, as a surgeon can supervise the robot operation or give high-level commands instead of low-level control of the tools. We present a dataset of needle insertion in soft tissue with two arms, where Arm 1 inserts the needle into the tissue and Arm 2 actively manipulates the soft tissue to ensure the desired and actual exit points are the same. This is important in real surgery because suturing without active manipulation of the tissue may cause the suture to fail, as the stitch may not grip enough tissue to resist the force applied during suturing. The dataset includes 60 successful trials recorded by 3 pairs of stereo cameras. Moreover, we present Deep-robot Learning from Demonstrations, which predicts the desired state of the robot at the time step after t (the state that the optimal action taken at t yields) by looking at the video of the past n time steps of the task execution, where n is the memory time window. The experimental results show that our proposed deep model architecture outperforms existing methods. Although the solution is not yet ready to be deployed on a real robot, the results indicate the possibility of future development for real robot deployment.
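As a rough illustration of the prediction task described in the abstract, the sketch below regresses the desired robot state at the time step after t from an n-step window of past video frames. This is a minimal sketch assuming a PyTorch-style setup; the class name, state dimension, and the CNN/LSTM layers are illustrative assumptions, not the authors' released architecture.

```python
# Minimal sketch (not the paper's released code): predict the desired
# robot state at t+1 from an n-step window of past video frames.
# All names, dimensions, and layer choices are illustrative assumptions.
import torch
import torch.nn as nn

class NextStatePredictor(nn.Module):
    def __init__(self, state_dim: int = 14, hidden: int = 256):
        super().__init__()
        # Per-frame visual encoder (assumed architecture).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (B, 64)
        )
        # Temporal aggregation over the n-step memory window.
        self.temporal = nn.LSTM(64, hidden, batch_first=True)
        # Regress the desired state at t+1 (e.g., poses of both arms).
        self.head = nn.Linear(hidden, state_dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, n, 3, H, W), the past n video frames.
        b, n, c, h, w = frames.shape
        feats = self.encoder(frames.reshape(b * n, c, h, w)).reshape(b, n, -1)
        _, (h_n, _) = self.temporal(feats)
        return self.head(h_n[-1])  # -> (B, state_dim)

# Example: predict the next desired state from a 10-frame history.
model = NextStatePredictor()
window = torch.randn(2, 10, 3, 128, 128)  # batch of 2 dummy windows
next_state = model(window)                # -> shape (2, 14)
```

The LSTM here merely stands in for some form of temporal aggregation over the memory window; the paper's actual deep model architecture is not reproduced here.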
Related papers
- General-purpose foundation models for increased autonomy in robot-assisted surgery [4.155479231940454]
This perspective article aims to provide a path toward increasing robot autonomy in robot-assisted surgery.
We argue that surgical robots are uniquely positioned to benefit from general-purpose models and provide three guiding actions toward increased autonomy in robot-assisted surgery.
arXiv Detail & Related papers (2024-01-01T06:15:16Z) - Autonomous Soft Tissue Retraction Using Demonstration-Guided
Reinforcement Learning [6.80186731352488]
Existing surgical task learning mainly pertains to rigid body interactions.
The advancement towards more sophisticated surgical robots necessitates the manipulation of soft bodies.
This work lays the foundation for future research into the development and refinement of surgical robots capable of managing both rigid and soft tissue interactions.
arXiv Detail & Related papers (2023-09-02T06:13:58Z) - Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z) - Surgical tool classification and localization: results and methods from
the MICCAI 2022 SurgToolLoc challenge [69.91670788430162]
We present the results of the SurgLoc 2022 challenge.
The goal was to leverage tool presence data as weak labels for machine learning models trained to detect tools.
We conclude by discussing these results in the broader context of machine learning and surgical data science.
arXiv Detail & Related papers (2023-05-11T21:44:39Z) - Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement
Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z) - Robotic Navigation Autonomy for Subretinal Injection via Intelligent
Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z) - Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a practical sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z) - Using Conditional Generative Adversarial Networks to Reduce the Effects
of Latency in Robotic Telesurgery [0.0]
In surgery, any micro-delay can injure a patient severely and, in some cases, result in fatality.
Current surgical robots use calibrated sensors to measure the position of the arms and tools.
In this work we present a purely optical approach that provides a measurement of the tool position in relation to the patient's tissues.
arXiv Detail & Related papers (2020-10-07T13:40:44Z) - Synthetic and Real Inputs for Tool Segmentation in Robotic Surgery [10.562627972607892]
We show that it may be possible to use robot kinematic data coupled with laparoscopic images to alleviate the labelling problem.
We propose a new deep learning based model for parallel processing of both laparoscopic and simulation images.
arXiv Detail & Related papers (2020-07-17T16:33:33Z) - Recurrent and Spiking Modeling of Sparse Surgical Kinematics [0.8458020117487898]
A growing number of studies have used machine learning to analyze video and kinematic data captured from surgical robots.
In this study, we explore the possibility of using only kinematic data to predict surgeons of similar skill levels.
We report that it is possible to identify surgical fellows receiving near perfect scores in the simulation exercises based on their motion characteristics alone.
arXiv Detail & Related papers (2020-05-12T15:41:45Z) - Scalable Multi-Task Imitation Learning with Autonomous Improvement [159.9406205002599]
We build an imitation learning system that can continuously improve through autonomous data collection.
We leverage the robot's own trials as demonstrations for tasks other than the one that the robot actually attempted.
In contrast to prior imitation learning approaches, our method can autonomously collect data with sparse supervision for continuous improvement.
arXiv Detail & Related papers (2020-02-25T18:56:42Z)