Live image-based neurosurgical guidance and roadmap generation using
unsupervised embedding
- URL: http://arxiv.org/abs/2303.18019v1
- Date: Fri, 31 Mar 2023 12:52:24 GMT
- Title: Live image-based neurosurgical guidance and roadmap generation using
unsupervised embedding
- Authors: Gary Sarwin, Alessandro Carretta, Victor Staartjes, Matteo Zoli, Diego
Mazzatenta, Luca Regli, Carlo Serra, Ender Konukoglu
- Abstract summary: We present a method for live image-only guidance leveraging a large data set of annotated neurosurgical videos.
A generated roadmap encodes the common anatomical paths taken in surgeries in the training set.
We trained and evaluated the proposed method with a data set of 166 transsphenoidal adenomectomy procedures.
- Score: 53.992124594124896
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advanced minimally invasive neurosurgery navigation relies mainly on Magnetic
Resonance Imaging (MRI) guidance. MRI guidance, however, only provides
pre-operative information in the majority of the cases. Once the surgery
begins, the value of this guidance diminishes to some extent because of the
anatomical changes due to surgery. Guidance with live image feedback coming
directly from the surgical device, e.g., endoscope, can complement MRI-based
navigation or be an alternative if MRI guidance is not feasible. With this
motivation, we present a method for live image-only guidance leveraging a large
data set of annotated neurosurgical videos. First, we report the performance of
a deep learning-based object detection method, YOLO, on detecting anatomical
structures in neurosurgical images. Second, we present a method for generating
neurosurgical roadmaps using unsupervised embedding without assuming exact
anatomical matches between patients, presence of an extensive anatomical atlas,
or the need for simultaneous localization and mapping. A generated roadmap
encodes the common anatomical paths taken in surgeries in the training set. At
inference, the roadmap can be used to map a surgeon's current location using
live image feedback on the path to provide guidance by being able to predict
which structures should appear going forward or backward, much like a mapping
application. Even though the embedding is not supervised by position
information, we show that it is correlated to the location inside the brain and
on the surgical path. We trained and evaluated the proposed method with a data
set of 166 transsphenoidal adenomectomy procedures.
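
The abstract names YOLO as the anatomy detector but gives no implementation details. As a minimal, hypothetical sketch of that first step, the snippet below runs an off-the-shelf ultralytics YOLO model over endoscopic video frames and collects per-frame structure detections; the weights file `anatomy_yolo.pt`, the class names, and the confidence threshold are placeholders rather than the authors' released setup.

```python
# Minimal sketch: per-frame anatomy detection with an off-the-shelf YOLO model.
# "anatomy_yolo.pt" is a hypothetical fine-tuned weights file, not the authors'.
import cv2
from ultralytics import YOLO

model = YOLO("anatomy_yolo.pt")

def detect_structures(video_path, conf_threshold=0.5):
    """Return, for each frame, the detected anatomical classes, scores and boxes."""
    detections = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = model(frame, verbose=False)[0]
        detections.append([
            (model.names[int(box.cls)], float(box.conf), box.xyxy[0].tolist())
            for box in result.boxes
            if float(box.conf) >= conf_threshold
        ])
    cap.release()
    return detections
```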
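
The roadmap itself is described only at a high level. The sketch below is a rough stand-in for the idea under assumed simplifications: per-frame structure-presence vectors are embedded by a small autoencoder trained without any position labels, the embedded training trajectories are averaged into bins of normalised surgery time to form a single path, and a live frame is localised by nearest-neighbour projection onto that path. The autoencoder, the binning scheme, and all sizes are illustrative assumptions, not the authors' architecture.

```python
# Schematic roadmap construction from per-frame structure-presence vectors.
# A plain autoencoder plus time-bin averaging stands in for the paper's
# unsupervised embedding; training (reconstruction loss) is omitted for brevity.
import torch
import torch.nn as nn

N_STRUCTURES, EMB_DIM, N_BINS = 16, 2, 50   # illustrative sizes

class FrameAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(N_STRUCTURES, 32), nn.ReLU(),
                                 nn.Linear(32, EMB_DIM))
        self.dec = nn.Sequential(nn.Linear(EMB_DIM, 32), nn.ReLU(),
                                 nn.Linear(32, N_STRUCTURES))

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

def build_roadmap(videos, model):
    """Average training-video embeddings inside bins of normalised surgery time."""
    bins = [[] for _ in range(N_BINS)]
    with torch.no_grad():
        for frames in videos:                          # frames: (T, N_STRUCTURES)
            z, _ = model(torch.as_tensor(frames, dtype=torch.float32))
            for t, e in enumerate(z):
                bins[min(t * N_BINS // len(frames), N_BINS - 1)].append(e)
    return torch.stack([torch.stack(b).mean(0) if b else torch.zeros(EMB_DIM)
                        for b in bins])                # (N_BINS, EMB_DIM)

def locate(frame_vec, roadmap, model):
    """Map a live frame to its nearest roadmap bin (a coarse path position)."""
    with torch.no_grad():
        z, _ = model(torch.as_tensor(frame_vec, dtype=torch.float32))
    # Decoding roadmap bins ahead of (or behind) the returned index previews
    # which structures are expected going forward or backward along the path.
    return int(torch.cdist(z.unsqueeze(0), roadmap).argmin())
```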
Related papers
- Intraoperative Registration by Cross-Modal Inverse Neural Rendering [61.687068931599846]
We present a novel approach for 3D/2D intraoperative registration during neurosurgery via cross-modal inverse neural rendering.
Our approach separates implicit neural representation into two components, handling anatomical structure preoperatively and appearance intraoperatively.
We tested our method on retrospective patient data from clinical cases, showing that it outperforms the state of the art while meeting current clinical standards for registration.
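
At its core, registration by inverse rendering means optimising a camera pose until the rendered view matches the intraoperative image. The loop below illustrates only that generic idea, with a placeholder differentiable `render(pose)` standing in for the paper's cross-modal neural renderer; it is not the authors' method.

```python
# Generic inverse-rendering pose refinement (illustrative only). `render` is a
# placeholder differentiable renderer built from pre-operative data; it maps a
# 6-DoF pose vector to a predicted intraoperative image.
import torch

def refine_pose(render, target_image, pose_init, steps=200, lr=1e-2):
    """pose_init: (6,) tensor with 3 rotation and 3 translation parameters."""
    pose = pose_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([pose], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(render(pose), target_image)
        loss.backward()
        opt.step()
    return pose.detach()
```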
arXiv Detail & Related papers (2024-09-18T13:40:59Z)
- Vision-Based Neurosurgical Guidance: Unsupervised Localization and Camera-Pose Prediction [41.91807060434709]
Localizing oneself during endoscopic procedures can be problematic due to the lack of distinguishable textures and landmarks.
We present a deep learning method based on anatomy recognition that constructs a surgical path in an unsupervised manner from surgical videos.
arXiv Detail & Related papers (2024-05-15T14:09:11Z)
- Real-time guidewire tracking and segmentation in intraoperative x-ray [52.51797358201872]
We propose a two-stage deep learning framework for real-time guidewire segmentation and tracking.
In the first stage, a YOLOv5 detector is trained, using the original X-ray images as well as synthetic ones, to output the bounding boxes of possible target guidewires.
In the second stage, a novel and efficient network is proposed to segment the guidewire in each detected bounding box.
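
A hedged sketch of that two-stage pattern: a detector proposes boxes, and a lightweight network segments only inside each cropped region before the masks are pasted back. `detector`, `segmenter`, and the crop size are placeholders, not the paper's models.

```python
# Two-stage sketch: detect candidate boxes, then segment inside each crop.
# `detector` and `segmenter` are placeholder callables, not the paper's networks.
import torch
import torch.nn.functional as F

def detect_then_segment(image, detector, segmenter, crop_size=128):
    """image: (1, 1, H, W) float tensor; returns a full-size binary mask."""
    full_mask = torch.zeros_like(image)
    for (x1, y1, x2, y2) in detector(image):               # pixel-coordinate boxes
        crop = F.interpolate(image[:, :, y1:y2, x1:x2], size=(crop_size, crop_size))
        mask = (torch.sigmoid(segmenter(crop)) > 0.5).float()
        mask = F.interpolate(mask, size=(y2 - y1, x2 - x1))
        full_mask[:, :, y1:y2, x1:x2] = torch.maximum(
            full_mask[:, :, y1:y2, x1:x2], mask)
    return full_mask
```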
arXiv Detail & Related papers (2024-04-12T20:39:19Z)
- Monocular Microscope to CT Registration using Pose Estimation of the Incus for Augmented Reality Cochlear Implant Surgery [3.8909273404657556]
We develop a method that permits direct 2D-to-3D registration of the surgical microscope video view to the pre-operative Computed Tomography (CT) scan without the need for external tracking equipment.
Our results demonstrate the method's accuracy, with an average rotation error of less than 25 degrees and translation errors of less than 2 mm, 3 mm, and 0.55% along the x, y, and z axes, respectively.
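
The summary does not describe the pose estimator itself, but the underlying 2D-to-3D geometry can be posed as a Perspective-n-Point problem. The sketch below uses OpenCV's solvePnP with hypothetical 2D landmark detections, their CT-space 3D coordinates, and known camera intrinsics; the paper's learned incus pose estimation is not reproduced here.

```python
# Minimal PnP sketch: recover the camera pose relative to the CT frame from
# 2D-3D landmark correspondences. All inputs are hypothetical placeholders.
import cv2
import numpy as np

def register_view_to_ct(points_2d, points_3d_ct, camera_matrix):
    """points_2d: (N, 2) pixel coords; points_3d_ct: (N, 3) CT-space coords (N >= 4)."""
    ok, rvec, tvec = cv2.solvePnP(
        points_3d_ct.astype(np.float64),
        points_2d.astype(np.float64),
        camera_matrix.astype(np.float64),
        None,                                  # assume no lens distortion
        flags=cv2.SOLVEPNP_ITERATIVE,
    )
    if not ok:
        raise RuntimeError("PnP registration failed")
    rotation, _ = cv2.Rodrigues(rvec)          # 3x3 rotation matrix
    return rotation, tvec                      # CT-to-camera transform
```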
arXiv Detail & Related papers (2024-03-12T00:26:08Z)
- Attentive Symmetric Autoencoder for Brain MRI Segmentation [56.02577247523737]
We propose a novel Attentive Symmetric Auto-encoder based on Vision Transformer (ViT) for 3D brain MRI segmentation tasks.
In the pre-training stage, the proposed auto-encoder pays more attention to reconstructing the informative patches, selected according to gradient metrics.
Experimental results show that our proposed attentive symmetric auto-encoder outperforms the state-of-the-art self-supervised learning methods and medical image segmentation models.
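
How the auto-encoder "pays more attention" to informative patches is not spelled out here; as one loose, hypothetical reading, the loss below weights each patch's reconstruction error by the image-gradient magnitude inside that patch. It is a guess at the flavour of the idea, not the paper's objective.

```python
# Hypothetical gradient-weighted patch reconstruction loss: patches with larger
# image gradients (taken as more informative) contribute more to the error.
import torch

def gradient_weighted_patch_loss(pred, target, patch=8):
    """pred, target: (B, 1, D, H, W) volumes split into non-overlapping cubes."""
    def to_patches(x):
        return (x.unfold(2, patch, patch).unfold(3, patch, patch)
                 .unfold(4, patch, patch).reshape(x.shape[0], -1, patch ** 3))

    grad = torch.zeros_like(target)            # forward-difference magnitude
    grad[:, :, 1:] += (target[:, :, 1:] - target[:, :, :-1]).abs()
    grad[:, :, :, 1:] += (target[:, :, :, 1:] - target[:, :, :, :-1]).abs()
    grad[:, :, :, :, 1:] += (target[:, :, :, :, 1:] - target[:, :, :, :, :-1]).abs()

    weights = to_patches(grad).mean(-1)                        # (B, n_patches)
    weights = weights / (weights.sum(-1, keepdim=True) + 1e-8)
    patch_err = (to_patches(pred) - to_patches(target)).abs().mean(-1)
    return (weights * patch_err).sum(-1).mean()
```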
arXiv Detail & Related papers (2022-09-19T09:43:19Z)
- A Long Short-term Memory Based Recurrent Neural Network for Interventional MRI Reconstruction [50.1787181309337]
We propose a convolutional long short-term memory (Conv-LSTM) based recurrent neural network (RNN), or ConvLR, to reconstruct interventional images with golden-angle radial sampling.
The proposed algorithm has the potential to achieve real-time i-MRI for DBS and can be used for general purpose MR-guided intervention.
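
PyTorch ships no built-in ConvLSTM, so the cell below hand-rolls the standard convolutional LSTM building block that the summary refers to; handling of golden-angle radial k-space data and the full reconstruction network are beyond this sketch.

```python
# Standard convolutional LSTM cell (hand-rolled). The data-consistency and
# radial-sampling parts of an i-MRI reconstruction pipeline are not shown.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, kernel=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel, padding=kernel // 2)

    def forward(self, x, state=None):
        b, _, h, w = x.shape
        if state is None:
            state = (x.new_zeros(b, self.hid_ch, h, w),
                     x.new_zeros(b, self.hid_ch, h, w))
        h_prev, c_prev = state
        i, f, o, g = self.gates(torch.cat([x, h_prev], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c_prev + torch.sigmoid(i) * torch.tanh(g)
        h_new = torch.sigmoid(o) * torch.tanh(c)
        return h_new, (h_new, c)

# Usage: feed undersampled frames one by one and carry the recurrent state, e.g.
# cell = ConvLSTMCell(in_ch=2, hid_ch=32); out, state = cell(frame, state)
```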
arXiv Detail & Related papers (2022-03-28T14:03:45Z)
- Unsupervised Region-based Anomaly Detection in Brain MRI with Adversarial Image Inpainting [4.019851137611981]
This paper proposes a fully automatic, unsupervised inpainting-based brain tumour segmentation system for T1-weighted MRI.
First, a deep convolutional neural network (DCNN) is trained to reconstruct missing healthy brain regions. Then, anomalous regions are determined by identifying areas of highest reconstruction loss.
We show that the proposed system is able to segment variously sized and abstract tumours, achieving a Dice score of 0.771 ± 0.176 (mean ± standard deviation).
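
A rough sketch of the detect-by-reconstruction idea: hide one region at a time, let a network trained only on healthy anatomy inpaint it, and keep the regions it reconstructs worst. The `inpainter`, patch size, and stride are placeholders, not the paper's DCNN or masking scheme.

```python
# Sketch of inpainting-based anomaly detection: regions a "healthy-brain"
# inpainter reconstructs poorly are flagged as candidate tumour regions.
import torch

def anomaly_map(image, inpainter, patch=32, stride=32):
    """image: (1, 1, H, W) tensor; returns a per-pixel reconstruction-error map."""
    _, _, H, W = image.shape
    error = torch.zeros_like(image)
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            masked = image.clone()
            masked[:, :, y:y + patch, x:x + patch] = 0      # hide this region
            with torch.no_grad():
                recon = inpainter(masked)                   # fill it back in
            err = (recon - image)[:, :, y:y + patch, x:x + patch].abs()
            error[:, :, y:y + patch, x:x + patch] = err
    return error    # threshold or keep the highest-error regions as the segmentation
```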
arXiv Detail & Related papers (2020-10-05T12:13:44Z)
- Towards Unsupervised Learning for Instrument Segmentation in Robotic Surgery with Cycle-Consistent Adversarial Networks [54.00217496410142]
We propose an unpaired image-to-image translation approach whose goal is to learn the mapping between an input endoscopic image and a corresponding annotation.
Our approach makes it possible to train image segmentation models without the need to acquire expensive annotations.
We test the proposed method on the EndoVis 2017 challenge dataset and show that it is competitive with supervised segmentation methods.
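
The core of such unpaired training is the cycle-consistency constraint; the snippet isolates just that term, with the two generators passed in as placeholders and the adversarial losses of the full CycleGAN-style objective omitted.

```python
# Cycle-consistency term for unpaired translation between endoscopic images (x)
# and annotation-style label maps (y). g_xy and g_yx are placeholder generators;
# the adversarial terms of the full objective are omitted.
import torch.nn.functional as F

def cycle_consistency_loss(real_x, real_y, g_xy, g_yx, weight=10.0):
    """L1 penalty for the x -> y -> x and y -> x -> y round trips."""
    recon_x = g_yx(g_xy(real_x))       # image -> annotation -> image
    recon_y = g_xy(g_yx(real_y))       # annotation -> image -> annotation
    return weight * (F.l1_loss(recon_x, real_x) + F.l1_loss(recon_y, real_y))
```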
arXiv Detail & Related papers (2020-07-09T01:39:39Z)