Real-time Virtual Intraoperative CT for Image Guided Surgery
- URL: http://arxiv.org/abs/2112.02608v1
- Date: Sun, 5 Dec 2021 16:06:34 GMT
- Title: Real-time Virtual Intraoperative CT for Image Guided Surgery
- Authors: Yangming Li, Neeraja Konuthula, Ian M. Humphreys, Kris Moe, Blake
Hannaford, Randall Bly
- Abstract summary: The work presents three methods for virtual intraoperative CT generation: tip motion-based, tip trajectory-based, and instrument-based.
Surgical results show all three methods improve the Dice Similarity Coefficient to above 86%, with F-score above 92% and precision above 89.91%.
The tip trajectory-based method was found to have the best performance and reached 96.87% precision in the surgical completeness evaluation.
- Score: 13.166023816014777
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Purpose: This paper presents a scheme for generating virtual
intraoperative CT scans in order to improve surgical completeness in Endoscopic
Sinus Surgeries (ESS). Approach: The work presents three methods, the tip
motion-based, the tip trajectory-based, and the instrument-based, along with
non-parametric smoothing and Gaussian Process Regression, for virtual
intraoperative CT generation. Results: The proposed methods were studied and
compared on ESS performed on cadavers. Surgical results show all three methods
improve the Dice Similarity Coefficient to above 86%, with F-score above 92% and
precision above 89.91%. The tip trajectory-based method was found to have the best
performance and reached 96.87% precision in the surgical completeness evaluation.
Conclusions: This work demonstrated that virtual intraoperative CT scans
improve the consistency between the actual surgical scene and the reference
model, and improve surgical completeness in ESS. Compared with actual
intraoperative CT scans, the proposed scheme has no impact on existing surgical
protocols, requires no hardware beyond what is already available in most ESS,
overcomes the high costs, repeated radiation exposure, and prolonged anesthesia
caused by actual intraoperative CTs, and is practical in ESS.
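The abstract names Gaussian Process Regression over instrument tip trajectories and reports Dice, F-score and precision, but gives no implementation detail. Below is a minimal, hypothetical sketch of that pipeline, not the authors' code: the function names, the fixed spherical tip footprint used to carve the preoperative volume, the kernel choice, and the (z, y, x) voxel ordering are all assumptions made for illustration.

```python
# Minimal sketch (not the authors' implementation): a virtual intraoperative CT
# obtained by smoothing tracked tip positions with Gaussian Process Regression
# and carving the visited voxels out of the preoperative scan, plus the overlap
# metrics named in the abstract. All parameters below are illustrative defaults.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def smooth_trajectory(timestamps, tip_positions, n_samples=500):
    """Fit a GPR per coordinate against time and resample the trajectory."""
    t = np.asarray(timestamps).reshape(-1, 1)
    t_query = np.linspace(t.min(), t.max(), n_samples).reshape(-1, 1)
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
    smoothed = []
    for axis in range(3):  # z, y, x fitted independently
        gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gpr.fit(t, np.asarray(tip_positions)[:, axis])
        smoothed.append(gpr.predict(t_query))
    return np.stack(smoothed, axis=1)  # (n_samples, 3)

def virtual_intraop_ct(preop_ct, trajectory_vox, tip_radius_vox=2, air_hu=-1000):
    """Mark voxels near the smoothed trajectory as resected (set to air HU)."""
    vct = preop_ct.copy()
    removed = np.zeros(preop_ct.shape, dtype=bool)
    r = int(np.ceil(tip_radius_vox))
    # Precompute a spherical stencil of voxel offsets around the tip.
    offsets = np.indices((2 * r + 1,) * 3).reshape(3, -1).T - r
    offsets = offsets[(offsets ** 2).sum(1) <= tip_radius_vox ** 2]
    for p in np.round(trajectory_vox).astype(int):   # positions in (z, y, x) voxels
        vox = p + offsets
        keep = np.all((vox >= 0) & (vox < preop_ct.shape), axis=1)
        vz, vy, vx = vox[keep].T
        removed[vz, vy, vx] = True
    vct[removed] = air_hu
    return vct, removed

def overlap_metrics(pred_mask, ref_mask):
    """Dice, precision and F-score between predicted and reference resections."""
    tp = np.logical_and(pred_mask, ref_mask).sum()
    precision = tp / max(pred_mask.sum(), 1)
    recall = tp / max(ref_mask.sum(), 1)
    dice = 2 * tp / max(pred_mask.sum() + ref_mask.sum(), 1)
    f_score = 2 * precision * recall / max(precision + recall, 1e-8)
    return dice, precision, f_score
```

In practice the carved footprint would come from the registered instrument geometry rather than a fixed-radius sphere; the sphere is only a placeholder to keep the sketch self-contained.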
Related papers
- Multimodal Learning With Intraoperative CBCT & Variably Aligned Preoperative CT Data To Improve Segmentation [0.21847754147782888]
Cone-beam computed tomography (CBCT) is an important tool facilitating computer aided interventions.
While the degraded image quality can affect downstream segmentation, the availability of high quality, preoperative scans represents potential for improvements.
We propose a multimodal learning method that fuses roughly aligned CBCT and CT scans and investigate the effect of CBCT quality and misalignment on the final segmentation performance.
arXiv Detail & Related papers (2024-06-17T15:31:54Z) - Deep learning network to correct axial and coronal eye motion in 3D OCT
retinal imaging [65.47834983591957]
We propose deep learning based neural networks to correct axial and coronal motion artifacts in OCT based on a single scan.
The experimental result shows that the proposed method can effectively correct motion artifacts and achieve smaller error than other methods.
arXiv Detail & Related papers (2023-05-27T03:55:19Z) - TranSOP: Transformer-based Multimodal Classification for Stroke
Treatment Outcome Prediction [2.358784542343728]
We propose a transformer-based multimodal network (TranSOP) for a classification approach that employs clinical metadata and imaging information.
This includes a fusion module to efficiently combine 3D non-contrast computed tomography (NCCT) features and clinical information.
In comparative experiments using unimodal and multimodal data, we achieve a state-of-the-art AUC score of 0.85.
arXiv Detail & Related papers (2023-01-25T21:05:10Z) - CTT-Net: A Multi-view Cross-token Transformer for Cataract Postoperative
Visual Acuity Prediction [20.549329151298355]
We propose a novel Cross-token Transformer Network (CTT-Net) for postoperative VA prediction.
To effectively fuse multi-view features of OCT images, we develop cross-token attention that could restrict redundant/unnecessary attention flow.
We use the preoperative VA value to provide more information for postoperative VA prediction and facilitate fusion between views.
arXiv Detail & Related papers (2022-12-12T09:39:22Z) - Surgical Phase Recognition in Laparoscopic Cholecystectomy [57.929132269036245]
We propose a Transformer-based method that utilizes calibrated confidence scores for a 2-stage inference pipeline.
Our method outperforms the baseline model on the Cholec80 dataset, and can be applied to a variety of action segmentation methods.
arXiv Detail & Related papers (2022-06-14T22:55:31Z) - CholecTriplet2021: A benchmark challenge for surgical action triplet
recognition [66.51610049869393]
This paper presents CholecTriplet 2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos.
We present the challenge setup and assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge.
A total of 4 baseline methods and 19 new deep learning algorithms are presented to recognize surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%.
arXiv Detail & Related papers (2022-04-10T18:51:55Z) - Incremental Cross-view Mutual Distillation for Self-supervised Medical
CT Synthesis [88.39466012709205]
This paper builds a novel medical slice synthesis method to increase the between-slice resolution.
Considering that the ground-truth intermediate medical slices are always absent in clinical practice, we introduce the incremental cross-view mutual distillation strategy.
Our method outperforms state-of-the-art algorithms by clear margins.
arXiv Detail & Related papers (2021-12-20T03:38:37Z) - CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for
Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran.
Our empirical results show that CyTran outperforms all competing methods.
arXiv Detail & Related papers (2021-10-12T23:25:03Z) - Systematic Clinical Evaluation of A Deep Learning Method for Medical
Image Segmentation: Radiosurgery Application [48.89674088331313]
We systematically evaluate a Deep Learning (DL) method in a 3D medical image segmentation task.
Our method is integrated into the radiosurgery treatment process and directly impacts the clinical workflow.
arXiv Detail & Related papers (2021-08-21T16:15:40Z) - Towards Unified Surgical Skill Assessment [18.601526803020885]
We propose a unified multi-path framework for automatic surgical skill assessment.
We conduct experiments on the JIGSAWS dataset of simulated surgical tasks, and a new clinical dataset of real laparoscopic surgeries.
arXiv Detail & Related papers (2021-06-02T09:06:43Z) - Multi-Scale Supervised 3D U-Net for Kidneys and Kidney Tumor
Segmentation [0.8397730500554047]
We present a multi-scale supervised 3D U-Net, MSS U-Net, to automatically segment kidneys and kidney tumors from CT images.
Our architecture combines deep supervision with an exponential logarithmic loss to increase the 3D U-Net training efficiency (see the loss sketch after this list).
This architecture shows superior performance compared to state-of-the-art works using data from KiTS19 public dataset.
arXiv Detail & Related papers (2020-04-17T08:25:43Z)
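The MSS U-Net entry above pairs deep supervision with an exponential logarithmic loss. The snippet below is a minimal sketch of that loss in the spirit of Wong et al.'s exponential logarithmic loss formulation, not the MSS U-Net authors' code; the hyper-parameter defaults, tensor layout, and frequency-based class weights are assumptions made for illustration.

```python
# Minimal sketch (an assumption, not the MSS U-Net code): exponential
# logarithmic loss combining a log-Dice term and a frequency-weighted
# log cross-entropy term, each raised to a gamma exponent.
import torch

def exp_log_loss(probs, target, gamma_dice=0.3, gamma_ce=0.3,
                 w_dice=0.8, w_ce=0.2, eps=1e-6):
    """probs: (N, C, D, H, W) softmax outputs; target: (N, D, H, W) long labels."""
    n_classes = probs.shape[1]
    onehot = torch.nn.functional.one_hot(target, n_classes)   # (N, D, H, W, C)
    onehot = onehot.permute(0, 4, 1, 2, 3).float()             # (N, C, D, H, W)

    # Exponential logarithmic Dice term, averaged over classes.
    dims = (0, 2, 3, 4)
    inter = (probs * onehot).sum(dims)
    union = (probs + onehot).sum(dims)
    dice = (2 * inter + eps) / (union + eps)                   # soft Dice per class
    loss_dice = ((-torch.log(dice.clamp_min(eps))) ** gamma_dice).mean()

    # Exponential logarithmic cross-entropy term with label-frequency weights.
    freq = onehot.mean(dims).clamp_min(eps)                    # per-class voxel frequency
    class_w = (freq.sum() / freq) ** 0.5
    p_true = (probs * onehot).sum(1).clamp_min(eps)            # probability of true label
    voxel_w = class_w[target]
    loss_ce = (voxel_w * (-torch.log(p_true)) ** gamma_ce).mean()

    return w_dice * loss_dice + w_ce * loss_ce
```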
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.