A Review on Organ Deformation Modeling Approaches for Reliable Surgical Navigation using Augmented Reality
- URL: http://arxiv.org/abs/2408.02713v1
- Date: Mon, 5 Aug 2024 14:03:17 GMT
- Title: A Review on Organ Deformation Modeling Approaches for Reliable Surgical Navigation using Augmented Reality
- Authors: Zheng Han, Qi Dou
- Abstract summary: Augmented Reality (AR) holds the potential to revolutionize surgical procedures by allowing surgeons to visualize critical structures within the patient's body.
Challenges arise from dynamic deformations of organs during surgery, making preoperative models inadequate for faithfully representing intraoperative anatomy.
This review seeks to enhance the understanding of organ deformation modeling in AR-guided surgery and to discuss potential topics for future advancement.
- Score: 13.77243506924244
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Augmented Reality (AR) holds the potential to revolutionize surgical procedures by allowing surgeons to visualize critical structures within the patient's body. This is achieved by superimposing preoperative organ models onto the actual anatomy. Challenges arise from dynamic deformations of organs during surgery, which make preoperative models inadequate for faithfully representing intraoperative anatomy. To enable reliable navigation in augmented surgery, modeling intraoperative deformation to obtain an accurate alignment of the preoperative organ model with the intraoperative anatomy is indispensable. Despite the various methods proposed to model intraoperative organ deformation, there are still few literature reviews that systematically categorize and summarize these approaches. This review aims to fill this gap by providing a comprehensive, technically oriented overview of modeling methods for intraoperative organ deformation in augmented reality in surgery. Through a systematic search and screening process, 112 closely relevant papers were included in this review. By presenting the current status of organ deformation modeling methods and their clinical applications, this review seeks to enhance the understanding of organ deformation modeling in AR-guided surgery and to discuss potential topics for future advancement.
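As a concrete illustration of the alignment problem described in the abstract, the following minimal sketch (our illustration, not code from the review) estimates a rigid rotation and translation between points sampled from a preoperative model and corresponding intraoperative points, using the standard SVD-based (Kabsch) closed form. All function names and the toy data are hypothetical; the review itself concerns the non-rigid deformation modeling that must be layered on top of such a rigid baseline.

```python
# Minimal, illustrative sketch (not from the review): rigid alignment of
# preoperative model points to corresponding intraoperative points via the
# SVD-based (Kabsch) least-squares solution.
import numpy as np

def rigid_align(source: np.ndarray, target: np.ndarray):
    """Return rotation R and translation t mapping each source point
    onto its target: R @ source_i + t ~= target_i.

    source, target: (N, 3) arrays of corresponding 3D points
    (correspondences are assumed known here; in practice ICP or a
    learned matcher would have to supply them).
    """
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Toy usage: recover a known pose from noisy "intraoperative" observations.
rng = np.random.default_rng(0)
pre_op = rng.normal(size=(200, 3))               # points sampled from the preoperative model
angle = np.deg2rad(15.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
intra_op = pre_op @ R_true.T + np.array([5.0, -2.0, 1.0])
intra_op += rng.normal(scale=0.01, size=intra_op.shape)
R, t = rigid_align(pre_op, intra_op)
error = np.linalg.norm(pre_op @ R.T + t - intra_op, axis=1).mean()
print(f"mean alignment error after rigid registration: {error:.4f}")
```

In practice, correspondences are not given and the organ deforms non-rigidly, which is exactly where the modeling approaches surveyed in the review (commonly biomechanical simulation or learned deformation fields) come in.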
Related papers
- Deep Learning for Surgical Instrument Recognition and Segmentation in Robotic-Assisted Surgeries: A Systematic Review [0.24342814271497581]
Applying deep learning (DL) for annotating surgical instruments in robot-assisted minimally invasive surgeries represents a significant advancement in surgical technology.
These sophisticated DL models have shown notable improvements in the precision and efficiency of detecting and segmenting surgical tools.
The application of DL in surgical education is transformative.
arXiv Detail & Related papers (2024-10-09T04:07:38Z)
- A Diffusion Model for Simulation Ready Coronary Anatomy with Morpho-skeletal Control [1.053652600598537]
We investigate how Latent Diffusion Models can custom synthesize coronary anatomy for virtual intervention studies.
Our framework enables the generation and editing of coronary anatomy in a controllable manner, allowing device designers to derive mechanistic insights.
arXiv Detail & Related papers (2024-07-22T13:44:06Z)
- An Endoscopic Chisel: Intraoperative Imaging Carves 3D Anatomical Models [8.516340459721484]
We propose a first vision-based approach to update the preoperative 3D anatomical model.
Results show a decrease in error during surgical progression as opposed to increasing when no update is employed.
arXiv Detail & Related papers (2024-02-19T05:06:52Z)
- SAMSNeRF: Segment Anything Model (SAM) Guides Dynamic Surgical Scene Reconstruction by Neural Radiance Field (NeRF) [4.740415113160021]
We propose a novel approach called SAMSNeRF that combines Segment Anything Model (SAM) and Neural Radiance Field (NeRF) techniques.
Our experimental results on public endoscopy surgical videos demonstrate that our approach successfully reconstructs high-fidelity dynamic surgical scenes.
arXiv Detail & Related papers (2023-08-22T20:31:00Z)
- Prediction of Post-Operative Renal and Pulmonary Complications Using Transformers [69.81176740997175]
We evaluate the performance of transformer-based models in predicting postoperative acute renal failure, pulmonary complications, and postoperative in-hospital mortality.
Our results demonstrate that transformer-based models can achieve superior performance in predicting postoperative complications and outperform traditional machine learning models.
arXiv Detail & Related papers (2023-06-01T14:08:05Z)
- Safe Deep RL for Intraoperative Planning of Pedicle Screw Placement [61.28459114068828]
We propose an intraoperative planning approach for robotic spine surgery that leverages real-time observation for drill path planning based on Safe Deep Reinforcement Learning (DRL).
Our approach was capable of achieving 90% bone penetration with respect to the gold standard (GS) drill planning.
arXiv Detail & Related papers (2023-05-09T11:42:53Z)
- Live image-based neurosurgical guidance and roadmap generation using unsupervised embedding [53.992124594124896]
We present a method for live image-only guidance leveraging a large data set of annotated neurosurgical videos.
A generated roadmap encodes the common anatomical paths taken in surgeries in the training set.
We trained and evaluated the proposed method with a data set of 166 transsphenoidal adenomectomy procedures.
arXiv Detail & Related papers (2023-03-31T12:52:24Z)
- Quantification of Robotic Surgeries with Vision-Based Deep Learning [45.165919577877695]
We propose a unified deep learning framework, entitled Roboformer, which operates exclusively on videos recorded during surgery.
We validated our framework on four video-based datasets of two commonly-encountered types of steps within minimally-invasive robotic surgeries.
arXiv Detail & Related papers (2022-05-06T06:08:35Z)
- CholecTriplet2021: A benchmark challenge for surgical action triplet recognition [66.51610049869393]
This paper presents CholecTriplet 2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos.
We present the challenge setup and assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge.
A total of 4 baseline methods and 19 new deep learning algorithms are presented to recognize surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1% (an illustrative mAP computation is sketched after this list).
arXiv Detail & Related papers (2022-04-10T18:51:55Z)
- Multimodal Semantic Scene Graphs for Holistic Modeling of Surgical Procedures [70.69948035469467]
We take advantage of the latest computer vision methodologies for generating 3D graphs from camera views.
We then introduce the Multimodal Semantic Scene Graph (MSSG), which aims to provide a unified symbolic and semantic representation of surgical procedures.
arXiv Detail & Related papers (2021-06-09T14:35:44Z)
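The CholecTriplet2021 entry above quotes results as mean average precision (mAP). For readers unfamiliar with the metric, here is a generic, illustrative mAP computation for multi-label recognition; it assumes per-class prediction scores and binary ground-truth labels and is not the challenge's official evaluation code.

```python
# Generic mean-average-precision (mAP) sketch for multi-label recognition.
# Illustrative only; not the CholecTriplet2021 official evaluation code.
import numpy as np

def average_precision(scores: np.ndarray, labels: np.ndarray) -> float:
    """Non-interpolated AP for one class: mean of precision@k over positives."""
    order = np.argsort(-scores)                  # rank samples by descending score
    labels = labels[order]
    n_pos = labels.sum()
    if n_pos == 0:
        return 0.0                               # convention: undefined AP -> 0
    cum_tp = np.cumsum(labels)
    precision_at_k = cum_tp / (np.arange(labels.size) + 1)
    return float((precision_at_k * labels).sum() / n_pos)

def mean_average_precision(scores: np.ndarray, labels: np.ndarray) -> float:
    """scores, labels: (num_samples, num_classes) arrays; labels are 0/1."""
    return float(np.mean([average_precision(scores[:, c], labels[:, c])
                          for c in range(scores.shape[1])]))

# Toy usage with random predictions over five hypothetical triplet classes.
rng = np.random.default_rng(0)
y_true = (rng.random((100, 5)) < 0.3).astype(int)
y_score = rng.random((100, 5))
print(f"mAP: {mean_average_precision(y_score, y_true):.3f}")
```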