Safe Deep RL for Intraoperative Planning of Pedicle Screw Placement
- URL: http://arxiv.org/abs/2305.05354v2
- Date: Wed, 10 May 2023 13:14:57 GMT
- Title: Safe Deep RL for Intraoperative Planning of Pedicle Screw Placement
- Authors: Yunke Ao, Hooman Esfandiari, Fabio Carrillo, Yarden As, Mazda Farshad,
Benjamin F. Grewe, Andreas Krause, and Philipp Fuernstahl
- Abstract summary: We propose an intraoperative planning approach for robotic spine surgery that leverages real-time observation for drill path planning based on Safe Deep Reinforcement Learning (DRL).
Our approach was capable of achieving 90% bone penetration with respect to the gold standard (GS) drill planning.
- Score: 61.28459114068828
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spinal fusion surgery requires highly accurate implantation of pedicle screw
implants, which must be conducted in critical proximity to vital structures
with a limited view of anatomy. Robotic surgery systems have been proposed to
improve placement accuracy; however, state-of-the-art systems suffer from the
limitations of open-loop approaches, as they follow traditional concepts of
preoperative planning and intraoperative registration, without real-time
recalculation of the surgical plan. In this paper, we propose an intraoperative
planning approach for robotic spine surgery that leverages real-time
observation for drill path planning based on Safe Deep Reinforcement Learning
(DRL). The main contributions of our method are (1) the capability to guarantee
safe actions by introducing an uncertainty-aware distance-based safety filter;
and (2) the ability to compensate for incomplete intraoperative anatomical
information by encoding a priori knowledge about anatomical structures with a
network pre-trained on high-fidelity anatomical models. Planning quality was
assessed by quantitative comparison with the gold standard (GS) drill planning.
In experiments with 5 models derived from real magnetic resonance imaging (MRI)
data, our approach was capable of achieving 90% bone penetration with respect
to the GS while satisfying safety requirements, even under observation and
motion uncertainty. To the best of our knowledge, our approach is the first
safe DRL approach focusing on orthopedic surgeries.
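The uncertainty-aware distance-based safety filter described in contribution (1) can be illustrated with a minimal sketch: the policy's proposed drill action is overridden whenever the worst-case distance to a critical structure (predicted mean minus a multiple of the predictive standard deviation) drops below a safety margin. All names, the override rule, and the parameter values below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def safety_filter(action, predicted_distance, distance_std,
                  k_sigma=2.0, margin_mm=1.0):
    """Illustrative uncertainty-aware distance-based safety filter.

    action             -- drill action proposed by the DRL policy
    predicted_distance -- estimated distance (mm) to the nearest critical structure
    distance_std       -- predictive uncertainty (std. dev.) of that distance
    k_sigma            -- how many standard deviations to subtract (assumption)
    margin_mm          -- minimum allowed worst-case clearance (assumption)

    Returns (filtered_action, intervened).
    """
    # Inflate the risk estimate by the model's uncertainty.
    worst_case = predicted_distance - k_sigma * distance_std
    if worst_case < margin_mm:
        # Worst-case clearance violated: fall back to a conservative
        # safe action (here: stop advancing the drill).
        return np.zeros_like(action), True
    return action, False
```

In this sketch the fallback is simply a zero action; a real system would substitute a verified safe maneuver. The key idea is that the filter acts on the uncertainty-inflated worst case, so the policy is only overridden when safety cannot be certified.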
Related papers
- Safe Navigation for Robotic Digestive Endoscopy via Human Intervention-based Reinforcement Learning [5.520042381826271]
We propose a Human Intervention (HI)-based Proximal Policy Optimization framework, dubbed HI-PPO, to enhance RDE's safety.
We introduce an Enhanced Exploration Mechanism (EEM) to address the low exploration efficiency of the standard PPO.
We also introduce a reward-penalty adjustment (RPA) to penalize unsafe actions during initial interventions.
arXiv Detail & Related papers (2024-09-24T03:01:30Z)
- Hypergraph-Transformer (HGT) for Interactive Event Prediction in Laparoscopic and Robotic Surgery [50.3022015601057]
We propose a predictive neural network that is capable of understanding and predicting critical interactive aspects of surgical workflow from intra-abdominal video.
We verify our approach on established surgical datasets and applications, including the detection and prediction of action triplets.
Our results demonstrate the superiority of our approach compared to unstructured alternatives.
arXiv Detail & Related papers (2024-02-03T00:58:05Z)
- Surgical-DINO: Adapter Learning of Foundation Models for Depth Estimation in Endoscopic Surgery [12.92291406687467]
We design a foundation model-based depth estimation method, referred to as Surgical-DINO, a low-rank adaptation of the DINOv2 for depth estimation in endoscopic surgery.
We build LoRA layers and integrate them into DINO to adapt with surgery-specific domain knowledge instead of conventional fine-tuning.
Our model is extensively validated on a MICCAI challenge dataset of SCARED, which is collected from da Vinci Xi endoscope surgery.
arXiv Detail & Related papers (2024-01-11T16:22:42Z)
- Automatic registration with continuous pose updates for marker-less surgical navigation in spine surgery [52.63271687382495]
We present an approach that automatically solves the registration problem for lumbar spinal fusion surgery in a radiation-free manner.
A deep neural network was trained to segment the lumbar spine and simultaneously predict its orientation, yielding an initial pose for preoperative models.
An intuitive surgical guidance is provided thanks to the integration into an augmented reality based navigation system.
arXiv Detail & Related papers (2023-08-05T16:26:41Z)
- FocalErrorNet: Uncertainty-aware focal modulation network for inter-modal registration error estimation in ultrasound-guided neurosurgery [3.491999371287298]
Intra-operative tissue deformation (called brain shift) can move the surgical target and render the pre-surgical plan invalid.
We propose a novel deep learning technique based on 3D focal modulation in conjunction with uncertainty estimation to accurately assess MRI-iUS registration errors for brain tumor surgery.
arXiv Detail & Related papers (2023-07-26T21:42:22Z)
- GLSFormer: Gated - Long, Short Sequence Transformer for Step Recognition in Surgical Videos [57.93194315839009]
We propose a vision transformer-based approach to learn temporal features directly from sequence-level patches.
We extensively evaluate our approach on two cataract surgery video datasets, Cataract-101 and D99, and demonstrate superior performance compared to various state-of-the-art methods.
arXiv Detail & Related papers (2023-07-20T17:57:04Z)
- Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z)
- Quantification of Robotic Surgeries with Vision-Based Deep Learning [45.165919577877695]
We propose a unified deep learning framework, entitled Roboformer, which operates exclusively on videos recorded during surgery.
We validated our framework on four video-based datasets of two commonly-encountered types of steps within minimally-invasive robotic surgeries.
arXiv Detail & Related papers (2022-05-06T06:08:35Z)
- Real-time landmark detection for precise endoscopic submucosal dissection via shape-aware relation network [51.44506007844284]
We propose a shape-aware relation network for accurate and real-time landmark detection in endoscopic submucosal dissection surgery.
We first devise an algorithm to automatically generate relation keypoint heatmaps, which intuitively represent the prior knowledge of spatial relations among landmarks.
We then develop two complementary regularization schemes to progressively incorporate the prior knowledge into the training process.
arXiv Detail & Related papers (2021-11-08T07:57:30Z)
- TeCNO: Surgical Phase Recognition with Multi-Stage Temporal Convolutional Networks [43.95869213955351]
We propose a Multi-Stage Temporal Convolutional Network (MS-TCN) that performs hierarchical prediction refinement for surgical phase recognition.
Our method is thoroughly evaluated on two datasets of laparoscopic cholecystectomy videos with and without the use of additional surgical tool information.
arXiv Detail & Related papers (2020-03-24T10:12:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.