End-to-End assessment of AR-assisted neurosurgery systems
- URL: http://arxiv.org/abs/2311.01912v1
- Date: Fri, 3 Nov 2023 13:41:44 GMT
- Title: End-to-End assessment of AR-assisted neurosurgery systems
- Authors: Mahdi Bagheri, Farhad Piri, Hadi Digale, Saem Sattarzadeh, Mohammad
Reza Mohammadi
- Abstract summary: We classify different techniques for assessing an AR-assisted neurosurgery system and propose a new technique to systematize the assessment procedure.
We found that although the system can undergo registration and tracking errors, physical feedback can significantly reduce the error caused by hologram displacement.
The lack of visual feedback on the hologram does not have a significant effect on the user's 3D perception.
- Score: 0.5892638927736115
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Augmented Reality (AR) has emerged as a significant advancement in surgical
procedures, offering a solution to the challenges posed by traditional
neuronavigation methods. These conventional techniques often necessitate
surgeons to split their focus between the surgical site and a separate monitor
that displays guiding images. Over the years, many systems have been developed
to register and track the hologram at the targeted locations, each employing
its own evaluation technique. However, measuring hologram displacement is not
a straightforward task because of factors such as occlusion, the
Vergence-Accommodation Conflict, and holograms that are unstable in space. In this study,
we explore and classify different techniques for assessing an AR-assisted
neurosurgery system and propose a new technique to systematize the assessment
procedure. Moreover, we conduct a deeper investigation to assess surgeon error
in the pre- and intra-operative phases of the surgery based on the respective
feedback given. We found that although the system can undergo registration and
tracking errors, physical feedback can significantly reduce the error caused by
hologram displacement. However, the lack of visual feedback on the hologram
does not have a significant effect on the user's 3D perception.
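The hologram displacement assessed above can be quantified as a fiducial-based error metric. Below is a minimal sketch, assuming corresponding fiducial points can be measured in both the planned (pre-operative) frame and the observed hologram position; the function name and point data are illustrative, not the paper's actual protocol.

```python
# Minimal sketch of a hologram-displacement metric: the mean Euclidean
# distance between where fiducials were planned to appear and where the
# rendered hologram actually places them. Illustrative only.
import numpy as np

def displacement_error(planned, observed):
    """Mean Euclidean distance (mm) between corresponding fiducials."""
    return float(np.linalg.norm(observed - planned, axis=1).mean())

# Four planned fiducial positions (mm) and the hologram's observed
# positions, here simulated as a uniform shift of the whole hologram.
planned = np.array([[0.0, 0.0, 0.0],
                    [10.0, 0.0, 0.0],
                    [0.0, 10.0, 0.0],
                    [0.0, 0.0, 10.0]])
observed = planned + np.array([0.5, -0.3, 0.2])
print(round(displacement_error(planned, observed), 3))  # → 0.616
```

In practice the observed positions would come from a tracked probe touching the perceived hologram landmarks, which is how physical feedback can expose and reduce this kind of error.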
Related papers
- Monocular Microscope to CT Registration using Pose Estimation of the
Incus for Augmented Reality Cochlear Implant Surgery [3.8909273404657556]
We develop a method that permits direct 2D-to-3D registration of the view microscope video to the pre-operative Computed Tomography (CT) scan without the need for external tracking equipment.
Our results demonstrate the accuracy of the approach, with an average rotation error of less than 25 degrees and translation errors of less than 2 mm, 3 mm, and 0.55% for the x, y, and z axes, respectively.
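The error metrics reported above (a rotation error in degrees plus per-axis translation errors) can be computed from a ground-truth and an estimated rigid pose. A minimal sketch with illustrative 4x4 homogeneous poses; none of the values come from the paper:

```python
# Sketch: rotation error (angle of the residual rotation) and per-axis
# translation error between two rigid poses. Example poses are made up.
import numpy as np

def pose_errors(T_gt, T_est):
    """Return (rotation error in degrees, |dx|, |dy|, |dz|)."""
    R_err = T_gt[:3, :3].T @ T_est[:3, :3]
    cos_a = np.clip((np.trace(R_err) - 1.0) / 2.0, -1.0, 1.0)
    rot_deg = float(np.degrees(np.arccos(cos_a)))
    dt = np.abs(T_est[:3, 3] - T_gt[:3, 3])
    return rot_deg, float(dt[0]), float(dt[1]), float(dt[2])

T_gt = np.eye(4)
T_est = np.eye(4)
theta = np.radians(10.0)  # 10 degree rotation about z
T_est[:3, :3] = [[np.cos(theta), -np.sin(theta), 0.0],
                 [np.sin(theta),  np.cos(theta), 0.0],
                 [0.0, 0.0, 1.0]]
T_est[:3, 3] = [1.5, 2.0, 0.4]  # translation offset (mm)
rot, dx, dy, dz = pose_errors(T_gt, T_est)
print(round(rot, 1), dx, dy, dz)  # → 10.0 1.5 2.0 0.4
```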
arXiv Detail & Related papers (2024-03-12T00:26:08Z) - Action Recognition in Video Recordings from Gynecologic Laparoscopy [4.002010889177872]
Action recognition is a prerequisite for many applications in laparoscopic video analysis.
In this study, we design and evaluate a CNN-RNN architecture as well as a customized training-inference framework.
arXiv Detail & Related papers (2023-11-30T16:15:46Z) - Automatic registration with continuous pose updates for marker-less
surgical navigation in spine surgery [52.63271687382495]
We present an approach that automatically solves the registration problem for lumbar spinal fusion surgery in a radiation-free manner.
A deep neural network was trained to segment the lumbar spine and simultaneously predict its orientation, yielding an initial pose for preoperative models.
An intuitive surgical guidance is provided thanks to the integration into an augmented reality based navigation system.
arXiv Detail & Related papers (2023-08-05T16:26:41Z) - Next-generation Surgical Navigation: Marker-less Multi-view 6DoF Pose
Estimation of Surgical Instruments [66.74633676595889]
We present a multi-camera capture setup consisting of static and head-mounted cameras.
Second, we publish a multi-view RGB-D video dataset of ex-vivo spine surgeries, captured in a surgical wet lab and a real operating theatre.
Third, we evaluate three state-of-the-art single-view and multi-view methods for the task of 6DoF pose estimation of surgical instruments.
arXiv Detail & Related papers (2023-05-05T13:42:19Z) - Live image-based neurosurgical guidance and roadmap generation using
unsupervised embedding [53.992124594124896]
We present a method for live image-only guidance leveraging a large data set of annotated neurosurgical videos.
A generated roadmap encodes the common anatomical paths taken in surgeries in the training set.
We trained and evaluated the proposed method with a data set of 166 transsphenoidal adenomectomy procedures.
arXiv Detail & Related papers (2023-03-31T12:52:24Z) - Robotic Navigation Autonomy for Subretinal Injection via Intelligent
Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT systems, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z) - A Temporal Learning Approach to Inpainting Endoscopic Specularities and
Its effect on Image Correspondence [13.25903945009516]
We propose using a temporal generative adversarial network (GAN) to inpaint the hidden anatomy under specularities.
This is achieved using in-vivo data of gastric endoscopy (Hyper-Kvasir) in a fully unsupervised manner.
We also assess the effect of our method in computer vision tasks that underpin 3D reconstruction and camera motion estimation.
arXiv Detail & Related papers (2022-03-31T13:14:00Z) - 3D endoscopic depth estimation using 3D surface-aware constraints [16.161276518580262]
We show that depth estimation can be reformed from a 3D surface perspective.
We propose a loss function for depth estimation that integrates the surface-aware constraints.
Camera parameters are incorporated into the training pipeline to increase the control and transparency of the depth estimation.
arXiv Detail & Related papers (2022-03-04T04:47:20Z) - An Interpretable Multiple-Instance Approach for the Detection of
referable Diabetic Retinopathy from Fundus Images [72.94446225783697]
We propose a machine learning system for the detection of referable Diabetic Retinopathy in fundus images.
By extracting local information from image patches and combining it efficiently through an attention mechanism, our system is able to achieve high classification accuracy.
We evaluate our approach on publicly available retinal image datasets, in which it exhibits near state-of-the-art performance.
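The patch-level aggregation described above can be sketched as attention-based pooling over patch embeddings, in the spirit of attention-based multiple-instance learning; the dimensions and random weights below are illustrative assumptions, not the system's trained parameters.

```python
# Sketch: attention-based pooling of patch features into one image-level
# embedding. Each patch gets a learned scalar score; a softmax turns the
# scores into weights that sum to 1. Weights here are random stand-ins.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(patch_feats, w, v):
    """Weight each patch embedding by its attention score, then sum."""
    scores = np.tanh(patch_feats @ v) @ w  # one scalar per patch
    alpha = softmax(scores)                # attention weights
    return alpha @ patch_feats             # weighted bag embedding

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))  # 8 patches, 16-dim embeddings
v = rng.normal(size=(16, 4))
w = rng.normal(size=4)
bag = attention_pool(feats, w, v)
print(bag.shape)  # → (16,)
```

The attention weights also make the model interpretable: inspecting which patches receive high weight shows where in the fundus image the evidence for the prediction lies.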
arXiv Detail & Related papers (2021-03-02T13:14:15Z) - Robust Medical Instrument Segmentation Challenge 2019 [56.148440125599905]
Intraoperative tracking of laparoscopic instruments is often a prerequisite for computer and robotic-assisted interventions.
Our challenge was based on a surgical data set comprising 10,040 annotated images acquired from a total of 30 surgical procedures.
The results confirm the initial hypothesis, namely that algorithm performance degrades with an increasing domain gap.
arXiv Detail & Related papers (2020-03-23T14:35:08Z) - Spatiotemporal-Aware Augmented Reality: Redefining HCI in Image-Guided
Therapy [39.370739217840594]
Augmented reality (AR) has been introduced in the operating rooms in the last decade.
This paper shows how exemplary visualizations are redefined by taking full advantage of head-mounted displays.
The system's awareness of the geometric and physical characteristics of X-ray imaging allows the redefinition of different human-machine interfaces.
arXiv Detail & Related papers (2020-03-04T18:59:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.