Registration made easy -- standalone orthopedic navigation with HoloLens
- URL: http://arxiv.org/abs/2001.06209v1
- Date: Fri, 17 Jan 2020 09:22:21 GMT
- Title: Registration made easy -- standalone orthopedic navigation with HoloLens
- Authors: Florentin Liebmann, Simon Roner, Marco von Atzigen, Florian
Wanivenhaus, Caroline Neuhaus, José Spirig, Davide Scaramuzza, Reto Sutter,
Jess Snedeker, Mazda Farshad, Philipp Fürnstahl
- Abstract summary: We propose a surgical navigation approach comprising intraoperative surface digitization for registration and intuitive holographic navigation for pedicle screw placement that runs entirely on the Microsoft HoloLens.
Preliminary results from phantom experiments suggest that the method may meet clinical accuracy requirements.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In surgical navigation, finding correspondence between preoperative plan and
intraoperative anatomy, the so-called registration task, is imperative. One
promising approach is to intraoperatively digitize anatomy and register it with
the preoperative plan. State-of-the-art commercial navigation systems implement
such approaches for pedicle screw placement in spinal fusion surgery. Although
these systems improve surgical accuracy, they are not the gold standard in
clinical practice. Besides economic reasons, this may be due to their difficult
integration into clinical workflows and unintuitive navigation feedback.
Augmented Reality has the potential to overcome these limitations.
Consequently, we propose a surgical navigation approach comprising
intraoperative surface digitization for registration and intuitive holographic
navigation for pedicle screw placement that runs entirely on the Microsoft
HoloLens. Preliminary results from phantom experiments suggest that the method
may meet clinical accuracy requirements.
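The core technical step described above is registering a sparse, intraoperatively digitized surface point cloud to the preoperative model. The abstract does not detail the algorithm; purely as an illustration of how such surface-based registration is commonly solved (not the authors' implementation), here is a minimal iterative closest point (ICP) sketch in NumPy using an SVD-based rigid fit:

```python
import numpy as np

def best_fit_transform(A, B):
    """Least-squares rigid transform (Kabsch/SVD) mapping points A onto B."""
    cA, cB = A.mean(0), B.mean(0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    # correct for a possible reflection in the SVD solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cB - R @ cA
    return R, t

def icp(src, dst, iters=50, tol=1e-6):
    """Align sparse digitized points (src) to a preoperative model (dst).

    Returns R, t such that src @ R.T + t approximates dst.
    """
    T_R, T_t = np.eye(3), np.zeros(3)
    P = src.copy()
    prev_err = np.inf
    for _ in range(iters):
        # nearest-neighbour correspondences (brute force for clarity)
        d2 = ((P[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d2.argmin(1)]
        R, t = best_fit_transform(P, nn)
        P = P @ R.T + t
        # compose the incremental transform into the running total
        T_R, T_t = R @ T_R, R @ T_t + t
        err = np.sqrt(d2.min(1)).mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return T_R, T_t
```

In practice, clinical systems add robust correspondence rejection and a good initial pose; this sketch only conveys the registration principle the abstract refers to.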
Related papers
- Automated Surgical Skill Assessment in Endoscopic Pituitary Surgery using Real-time Instrument Tracking on a High-fidelity Bench-top Phantom [9.41936397281689]
Improved surgical skill is generally associated with improved patient outcomes, but assessment is subjective and labour-intensive.
A new public dataset is introduced, focusing on simulated surgery, using the nasal phase of endoscopic pituitary surgery as an exemplar.
A Multilayer Perceptron achieved 87% accuracy in predicting surgical skill level (novice or expert), with the "ratio of total procedure time to instrument visible time" correlated with higher surgical skill.
arXiv Detail & Related papers (2024-09-25T15:27:44Z) - ViTALS: Vision Transformer for Action Localization in Surgical Nephrectomy [7.145773305697571]
We introduce a new dataset of nephrectomy surgeries called UroSlice.
To perform action localization from these videos, we propose a novel model termed ViTALS.
Our model incorporates hierarchical dilated temporal convolution layers and inter-layer residual connections to capture the temporal correlations at finer as well as coarser granularities.
arXiv Detail & Related papers (2024-05-04T05:07:39Z) - Monocular Microscope to CT Registration using Pose Estimation of the
Incus for Augmented Reality Cochlear Implant Surgery [3.8909273404657556]
We develop a method that permits direct 2D-to-3D registration of the microscope video to the pre-operative Computed Tomography (CT) scan without the need for external tracking equipment.
Our results demonstrate accuracy, with an average rotation error of less than 25 degrees and translation errors of less than 2 mm, 3 mm, and 0.55% for the x, y, and z axes, respectively.
arXiv Detail & Related papers (2024-03-12T00:26:08Z) - Automatic registration with continuous pose updates for marker-less
surgical navigation in spine surgery [52.63271687382495]
We present an approach that automatically solves the registration problem for lumbar spinal fusion surgery in a radiation-free manner.
A deep neural network was trained to segment the lumbar spine and simultaneously predict its orientation, yielding an initial pose for preoperative models.
Intuitive surgical guidance is provided through integration into an augmented reality based navigation system.
arXiv Detail & Related papers (2023-08-05T16:26:41Z) - Safe Deep RL for Intraoperative Planning of Pedicle Screw Placement [61.28459114068828]
We propose an intraoperative planning approach for robotic spine surgery that leverages real-time observation for drill path planning based on Safe Deep Reinforcement Learning (DRL).
Our approach was capable of achieving 90% bone penetration with respect to the gold standard (GS) drill planning.
arXiv Detail & Related papers (2023-05-09T11:42:53Z) - Live image-based neurosurgical guidance and roadmap generation using
unsupervised embedding [53.992124594124896]
We present a method for live image-only guidance leveraging a large data set of annotated neurosurgical videos.
A generated roadmap encodes the common anatomical paths taken in surgeries in the training set.
We trained and evaluated the proposed method with a data set of 166 transsphenoidal adenomectomy procedures.
arXiv Detail & Related papers (2023-03-31T12:52:24Z) - Robotic Navigation Autonomy for Subretinal Injection via Intelligent
Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of instrument pose estimation, online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z) - Quantification of Robotic Surgeries with Vision-Based Deep Learning [45.165919577877695]
We propose a unified deep learning framework, entitled Roboformer, which operates exclusively on videos recorded during surgery.
We validated our framework on four video-based datasets of two commonly-encountered types of steps within minimally-invasive robotic surgeries.
arXiv Detail & Related papers (2022-05-06T06:08:35Z) - CholecTriplet2021: A benchmark challenge for surgical action triplet
recognition [66.51610049869393]
This paper presents CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos.
We present the challenge setup and assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge.
A total of 4 baseline methods and 19 new deep learning algorithms are presented to recognize surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%.
arXiv Detail & Related papers (2022-04-10T18:51:55Z)
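The mAP range quoted for CholecTriplet2021 is the mean of per-class average precisions over the triplet classes. As a hedged illustration of the metric itself (not the challenge's official evaluation code), AP for one class can be computed from confidence-ranked predictions:

```python
import numpy as np

def average_precision(scores, labels):
    """AP for one class: mean precision at each positive, ranked by confidence."""
    order = np.argsort(-scores)
    labels = labels[order]
    tp = np.cumsum(labels)
    precision = tp / np.arange(1, len(labels) + 1)
    n_pos = max(int(labels.sum()), 1)
    # average the precision values attained at each true positive
    return float((precision * labels).sum() / n_pos)

def mean_ap(score_matrix, label_matrix):
    """mAP: mean of per-class APs over classes with at least one positive."""
    aps = [average_precision(score_matrix[:, c], label_matrix[:, c])
           for c in range(label_matrix.shape[1])
           if label_matrix[:, c].any()]
    return float(np.mean(aps))
```

Benchmark implementations typically add interpolation and per-video aggregation; this sketch shows only the basic ranked-retrieval definition behind the reported numbers.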
This list is automatically generated from the titles and abstracts of the papers in this site.