Autonomously Navigating a Surgical Tool Inside the Eye by Learning from
Demonstration
- URL: http://arxiv.org/abs/2011.07785v1
- Date: Mon, 16 Nov 2020 08:30:02 GMT
- Title: Autonomously Navigating a Surgical Tool Inside the Eye by Learning from
Demonstration
- Authors: Ji Woong Kim, Changyan He, Muller Urias, Peter Gehlbach, Gregory D.
Hager, Iulian Iordachita, Marin Kobilarov
- Abstract summary: We propose to automate the tool-navigation task by learning to mimic expert demonstrations of the task.
A deep network is trained to imitate expert trajectories toward various locations on the retina based on recorded visual servoing to a given goal specified by the user.
We show that the network can reliably navigate a needle surgical tool to various desired locations, with an average accuracy of 137 microns in physical experiments and 94 microns in simulation.
- Score: 28.720332497794292
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A fundamental challenge in retinal surgery is safely navigating a surgical
tool to a desired goal position on the retinal surface while avoiding damage to
surrounding tissues, a procedure that typically requires tens-of-microns
accuracy. In practice, the surgeon relies on depth-estimation skills to
localize the tool-tip with respect to the retina in order to perform the
tool-navigation task, which can be prone to human error. To alleviate such
uncertainty, prior work has introduced ways to assist the surgeon by estimating
the tool-tip distance to the retina and providing haptic or auditory feedback.
However, automating the tool-navigation task itself remains unsolved and
largely unexplored. Such a capability, if reliably automated, could serve as a
building block to streamline complex procedures and reduce the chance for
tissue damage. Towards this end, we propose to automate the tool-navigation
task by learning to mimic expert demonstrations of the task. Specifically, a
deep network is trained to imitate expert trajectories toward various locations
on the retina based on recorded visual servoing to a given goal specified by
the user. The proposed autonomous navigation system is evaluated in simulation
and in physical experiments using a silicone eye phantom. We show that the
network can reliably navigate a needle surgical tool to various desired
locations to within 137 microns on average in physical experiments and 94
microns in simulation, and that it generalizes well to unseen situations such
as the presence of auxiliary surgical tools, variable eye backgrounds, and
varying brightness conditions.
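As a rough illustration of the learning setup described above, the following is a minimal behavioral-cloning sketch, not the authors' released code: it assumes the policy maps the current microscope image and a user-specified goal pixel to a small relative tool motion, and is trained to regress the motions recorded from expert demonstrations (the network architecture, input shapes, and training details are illustrative assumptions).

```python
# Illustrative behavioral-cloning sketch (assumed setup, not the paper's actual code).
import torch
import torch.nn as nn

class NavigationPolicy(nn.Module):
    """Predict a relative tool motion from the current image and a goal pixel."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(           # small CNN image encoder
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(              # fuse image features with goal (u, v)
            nn.Linear(32 + 2, 64), nn.ReLU(),
            nn.Linear(64, 3),                   # predicted (dx, dy, dz) tool motion
        )

    def forward(self, image, goal_uv):
        features = self.encoder(image)
        return self.head(torch.cat([features, goal_uv], dim=1))

def train_step(policy, optimizer, image, goal_uv, expert_motion):
    """One supervised step: regress the expert's recorded motion for this frame."""
    pred = policy(image, goal_uv)
    loss = nn.functional.mse_loss(pred, expert_motion)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    policy = NavigationPolicy()
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
    # Dummy batch standing in for frames sampled from recorded demonstrations.
    image = torch.randn(8, 3, 128, 128)
    goal_uv = torch.rand(8, 2)
    expert_motion = torch.randn(8, 3)
    print(train_step(policy, optimizer, image, goal_uv, expert_motion))
```

At deployment, the predicted motion would be applied to the tool and the process repeated from the next image, so the network would effectively perform closed-loop visual servoing toward the user-specified goal.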
Related papers
- Automated Surgical Skill Assessment in Endoscopic Pituitary Surgery using Real-time Instrument Tracking on a High-fidelity Bench-top Phantom [9.41936397281689]
Improved surgical skill is generally associated with improved patient outcomes, but assessment is subjective and labour-intensive.
A new public dataset is introduced, focusing on simulated surgery, using the nasal phase of endoscopic pituitary surgery as an exemplar.
A Multilayer Perceptron achieved 87% accuracy in predicting surgical skill level (novice or expert), with the "ratio of total procedure time to instrument visible time" correlated with higher surgical skill.
arXiv Detail & Related papers (2024-09-25T15:27:44Z)
- EyeLS: Shadow-Guided Instrument Landing System for Intraocular Target Approaching in Robotic Eye Surgery [51.05595735405451]
Robotic ophthalmic surgery is an emerging technology to facilitate high-precision interventions such as retina penetration in subretinal injection and removal of floating tissues in retinal detachment.
Current image-based methods cannot effectively estimate the needle tip's trajectory towards both retinal and floating targets.
We propose to use the shadow positions of the target and the instrument tip to estimate their relative depth (a toy geometric sketch of this cue appears after this list).
Our method successfully approaches targets on a retina model, achieving an average depth error of 0.0127 mm for floating targets and 0.3473 mm for retinal targets in the surgical simulator.
arXiv Detail & Related papers (2023-11-15T09:11:37Z)
- Collaborative Robotic Biopsy with Trajectory Guidance and Needle Tip Force Feedback [49.32653090178743]
We present a collaborative robotic biopsy system that combines trajectory guidance with kinesthetic feedback to assist the physician in needle placement.
The needle design senses forces at the needle tip based on optical coherence tomography, with machine learning used for real-time data processing.
We demonstrate that even smaller, deep target structures can be accurately sampled by performing post-mortem in situ biopsies of the pancreas.
arXiv Detail & Related papers (2023-06-12T14:07:53Z)
- Surgical tool classification and localization: results and methods from the MICCAI 2022 SurgToolLoc challenge [69.91670788430162]
We present the results of the SurgToolLoc 2022 challenge.
The goal was to leverage tool presence data as weak labels for machine learning models trained to detect tools.
We conclude by discussing these results in the broader context of machine learning and surgical data science.
arXiv Detail & Related papers (2023-05-11T21:44:39Z)
- Live image-based neurosurgical guidance and roadmap generation using unsupervised embedding [53.992124594124896]
We present a method for live image-only guidance leveraging a large data set of annotated neurosurgical videos.
A generated roadmap encodes the common anatomical paths taken in surgeries in the training set.
We trained and evaluated the proposed method with a data set of 166 transsphenoidal adenomectomy procedures.
arXiv Detail & Related papers (2023-03-31T12:52:24Z)
- Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of an instrument pose estimation method, an online registration between the robotic system and the iOCT system, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z)
- Autonomous Intraluminal Navigation of a Soft Robot using Deep-Learning-based Visual Servoing [13.268863900187025]
We present a synergic solution for intraluminal navigation consisting of a 3D printed endoscopic soft robot.
Visual servoing, based on Convolutional Neural Networks (CNNs), is used to achieve the autonomous navigation task.
The proposed robot is validated in anatomical phantoms in different path configurations.
arXiv Detail & Related papers (2022-07-01T13:17:45Z)
- Searching for Efficient Architecture for Instrument Segmentation in Robotic Surgery [58.63306322525082]
Most applications rely on accurate real-time segmentation of high-resolution surgical images.
We design a lightweight and highly efficient deep residual architecture tuned to perform real-time inference on high-resolution images.
arXiv Detail & Related papers (2020-07-08T21:38:29Z)
- SuPer Deep: A Surgical Perception Framework for Robotic Tissue Manipulation using Deep Learning for Feature Extraction [25.865648975312407]
We exploit deep learning methods for surgical perception.
We integrated deep neural networks, capable of efficient feature extraction, into the tissue reconstruction and instrument pose estimation processes.
Our framework achieves state-of-the-art tracking performance in a surgical environment by utilizing deep learning for feature extraction.
arXiv Detail & Related papers (2020-03-07T00:08:30Z)
- Registration made easy -- standalone orthopedic navigation with HoloLens [27.180079923996406]
We propose a surgical navigation approach comprising intraoperative surface digitization for registration and intuitive holographic navigation for pedicle screw placement that runs entirely on the Microsoft HoloLens.
Preliminary results from phantom experiments suggest that the method may meet clinical accuracy requirements.
arXiv Detail & Related papers (2020-01-17T09:22:21Z)
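The shadow cue mentioned in the EyeLS entry above can be illustrated with a toy geometric model. This is a simplified sketch under assumed conditions (a single light source at a known elevation and a near-overhead view), not the EyeLS algorithm itself: the image-plane gap between the instrument tip and its cast shadow shrinks toward zero as the tip approaches the surface, so the gap serves as a relative-depth proxy.

```python
# Toy shadow-based depth cue (assumption-laden illustration, not the EyeLS method).
import math

def relative_height(tip_px, shadow_px, mm_per_px, light_angle_deg=60.0):
    """Estimate tip height above the surface from the tip-to-shadow image distance.

    tip_px, shadow_px : (x, y) pixel coordinates of the tip and its shadow.
    mm_per_px         : image scale, assumed known from calibration.
    light_angle_deg   : assumed elevation angle of the light source above the surface.
    """
    gap_px = math.dist(tip_px, shadow_px)       # image-plane tip-to-shadow distance
    gap_mm = gap_px * mm_per_px
    # For an oblique light source, height = horizontal tip-to-shadow offset * tan(elevation).
    return gap_mm * math.tan(math.radians(light_angle_deg))

# Example: a 40-pixel gap at 5 microns/pixel and a 60-degree light elevation
# corresponds to roughly 0.35 mm of clearance above the retina.
print(relative_height((120, 88), (152, 112), mm_per_px=0.005))
```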