M&M: Unsupervised Mamba-based Mastoidectomy for Cochlear Implant Surgery with Noisy Data
- URL: http://arxiv.org/abs/2407.15787v3
- Date: Sun, 18 Aug 2024 20:38:13 GMT
- Title: M&M: Unsupervised Mamba-based Mastoidectomy for Cochlear Implant Surgery with Noisy Data
- Authors: Yike Zhang, Eduardo Davalos, Dingjie Su, Ange Lou, Jack H. Noble,
- Abstract summary: We propose a method to synthesize the mastoidectomy volume using only the preoperative CT scan, where the mastoid is intact.
Our approach estimates mastoidectomy regions with a mean Dice score of 70.0%.
- Score: 3.626734411913593
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cochlear Implant (CI) procedures involve inserting an array of electrodes into the cochlea located inside the inner ear. Mastoidectomy is a surgical procedure that uses a high-speed drill to remove part of the mastoid region of the temporal bone, providing safe access to the cochlea through the middle and inner ear. We aim to develop an intraoperative navigation system that registers plans created using 3D preoperative Computerized Tomography (CT) volumes with the 2D surgical microscope view. Herein, we propose a method to synthesize the mastoidectomy volume using only the preoperative CT scan, in which the mastoid is intact. We introduce an unsupervised learning framework designed to synthesize the mastoidectomy volume. For model training, this method uses postoperative CT scans while avoiding manual data cleaning or labeling, even when the region removed during mastoidectomy is visible but affected by metal artifacts, a low signal-to-noise ratio, or electrode wiring. Our approach estimates mastoidectomy regions with a mean Dice score of 70.0%. This approach represents a major step forward for CI intraoperative navigation: it predicts realistic mastoidectomy-removed regions in preoperative planning that can be used to register the pre-surgery plan to intraoperative microscopy.
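As a reference for the reported evaluation, the sketch below shows how a Dice score between a predicted and a reference binary mastoidectomy mask could be computed. It is a minimal illustration of the metric only, not the authors' code; the array names are hypothetical.

```python
import numpy as np

def dice_score(pred_mask: np.ndarray, ref_mask: np.ndarray, eps: float = 1e-8) -> float:
    """Dice coefficient between two binary 3D masks (e.g., a synthesized
    mastoidectomy region vs. a reference region), in [0, 1]."""
    pred = pred_mask.astype(bool)
    ref = ref_mask.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return (2.0 * intersection) / (pred.sum() + ref.sum() + eps)

# Hypothetical usage: both masks resampled to the same voxel grid.
# score = dice_score(predicted_removed_region, reference_removed_region)
# print(f"Dice: {score:.3f}")  # the paper reports a mean of 0.700 across cases
```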
Related papers
- Brain Tumor Segmentation (BraTS) Challenge 2024: Meningioma Radiotherapy Planning Automated Segmentation [47.119513326344126]
The BraTS-MEN-RT challenge aims to advance automated segmentation algorithms using the largest known multi-institutional dataset of radiotherapy planning brain MRIs.
Each case includes a defaced 3D post-contrast T1-weighted radiotherapy planning MRI in its native acquisition space.
Target volume annotations adhere to established radiotherapy planning protocols.
arXiv Detail & Related papers (2024-05-28T17:25:43Z)
- Monocular Microscope to CT Registration using Pose Estimation of the Incus for Augmented Reality Cochlear Implant Surgery [3.8909273404657556]
We develop a method that permits direct 2D-to-3D registration of the microscope video to the pre-operative Computed Tomography (CT) scan without the need for external tracking equipment.
Results show an average rotation error of less than 25 degrees and translation errors of less than 2 mm, 3 mm, and 0.55% for the x, y, and z axes, respectively. A minimal sketch of how such rotation and translation errors are commonly computed appears after this list.
arXiv Detail & Related papers (2024-03-12T00:26:08Z)
- An Endoscopic Chisel: Intraoperative Imaging Carves 3D Anatomical Models [8.516340459721484]
We propose a first vision-based approach to update the preoperative 3D anatomical model.
Results show that error decreases as the surgery progresses, whereas it increases when no update is employed.
arXiv Detail & Related papers (2024-02-19T05:06:52Z)
- AiAReSeg: Catheter Detection and Segmentation in Interventional Ultrasound using Transformers [75.20925220246689]
Endovascular surgeries are performed using the gold standard of fluoroscopy, which uses ionising radiation to visualise catheters and vasculature.
This work proposes a solution using an adaptation of a state-of-the-art machine learning transformer architecture to detect and segment catheters in axial interventional Ultrasound image sequences.
arXiv Detail & Related papers (2023-09-25T19:34:12Z)
- FocalErrorNet: Uncertainty-aware focal modulation network for inter-modal registration error estimation in ultrasound-guided neurosurgery [3.491999371287298]
Intra-operative tissue deformation (called brain shift) can move the surgical target and render the pre-surgical plan invalid.
We propose a novel deep learning technique based on 3D focal modulation in conjunction with uncertainty estimation to accurately assess MRI-iUS registration errors for brain tumor surgery.
arXiv Detail & Related papers (2023-07-26T21:42:22Z)
- Safe Deep RL for Intraoperative Planning of Pedicle Screw Placement [61.28459114068828]
We propose an intraoperative planning approach for robotic spine surgery that leverages real-time observation for drill path planning based on Safe Deep Reinforcement Learning (DRL).
Our approach was capable of achieving 90% bone penetration with respect to the gold standard (GS) drill planning.
arXiv Detail & Related papers (2023-05-09T11:42:53Z)
- Live image-based neurosurgical guidance and roadmap generation using unsupervised embedding [53.992124594124896]
We present a method for live image-only guidance leveraging a large data set of annotated neurosurgical videos.
A generated roadmap encodes the common anatomical paths taken in surgeries in the training set.
We trained and evaluated the proposed method with a data set of 166 transsphenoidal adenomectomy procedures.
arXiv Detail & Related papers (2023-03-31T12:52:24Z)
- Automatic Detection and Segmentation of Postoperative Cerebellar Damage Based on Normalization [1.1470070927586016]
A reliable localization and measurement of cerebellar damage is fundamental to study the relationship between the damaged cerebellar regions and postoperative neurological outcomes.
Existing cerebellum normalization methods are not reliable on postoperative scans; therefore, current approaches to measuring surgical damage rely on manual labelling.
We develop a robust algorithm to automatically detect and measure cerebellum damage due to surgery using postoperative 3D T1 magnetic resonance imaging.
arXiv Detail & Related papers (2022-03-03T22:26:59Z)
- A unified 3D framework for Organs at Risk Localization and Segmentation for Radiation Therapy Planning [56.52933974838905]
Current medical workflows require manual delineation of organs-at-risk (OAR).
In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation.
Our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging.
arXiv Detail & Related papers (2022-03-01T17:08:41Z)
- Segmentation of the Myocardium on Late-Gadolinium Enhanced MRI based on 2.5D Residual Squeeze and Excitation Deep Learning Model [55.09533240649176]
The aim of this work is to develop an accurate automatic segmentation method based on deep learning models for the myocardial borders on LGE-MRI.
A total of 320 exams (with a mean of 6 slices per exam) were used for training and 28 exams for testing.
The performance of the proposed ensemble model on basal and middle slices was similar to that of the intra-observer study and slightly lower on apical slices.
arXiv Detail & Related papers (2020-05-27T20:44:38Z)
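The 2.5D residual squeeze-and-excitation model in the myocardium segmentation entry above builds on the standard squeeze-and-excitation (SE) channel-attention block. Below is a minimal PyTorch sketch of such a block for illustration only, not that paper's implementation; the reduction ratio and the 2.5D slice-stacking note are assumptions.

```python
import torch
import torch.nn as nn

class SqueezeExcitation(nn.Module):
    """Minimal channel squeeze-and-excitation block."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: global spatial average per channel
        self.fc = nn.Sequential(                 # excitation: learn per-channel gates
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                             # reweight feature maps channel-wise

# In a 2.5D setting, adjacent LGE-MRI slices are commonly stacked along the channel
# axis before the first convolution; that design choice is an assumption here.
```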
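As noted in the monocular microscope-to-CT registration entry earlier in this list, rotation and per-axis translation errors are reported. The sketch below illustrates one common way to compute such errors from 4x4 homogeneous transforms; the function and argument names are hypothetical, and the entry's summary does not specify the exact convention that paper uses.

```python
import numpy as np

def pose_errors(T_est: np.ndarray, T_ref: np.ndarray):
    """Geodesic rotation error (degrees) and per-axis translation error
    between two 4x4 homogeneous transforms."""
    R_err = T_est[:3, :3] @ T_ref[:3, :3].T
    # Clip to guard against numerical drift outside [-1, 1].
    cos_angle = np.clip((np.trace(R_err) - 1.0) / 2.0, -1.0, 1.0)
    rot_deg = np.degrees(np.arccos(cos_angle))
    trans_err = np.abs(T_est[:3, 3] - T_ref[:3, 3])  # |dx|, |dy|, |dz|
    return rot_deg, trans_err
```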