Robotic Navigation Autonomy for Subretinal Injection via Intelligent
Real-Time Virtual iOCT Volume Slicing
- URL: http://arxiv.org/abs/2301.07204v1
- Date: Tue, 17 Jan 2023 21:41:21 GMT
- Title: Robotic Navigation Autonomy for Subretinal Injection via Intelligent
Real-Time Virtual iOCT Volume Slicing
- Authors: Shervin Dehghani, Michael Sommersperger, Peiyao Zhang, Alejandro
Martin-Gomez, Benjamin Busam, Peter Gehlbach, Nassir Navab, M. Ali Nasseri
and Iulian Iordachita
- Abstract summary: We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of instrument pose estimation, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
- Score: 88.99939660183881
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In the last decade, various robotic platforms have been introduced that could
support delicate retinal surgeries. Concurrently, to provide semantic
understanding of the surgical area, recent advances have enabled
microscope-integrated intraoperative Optical Coherence Tomography (iOCT) with
high-resolution 3D imaging at near video rate. The combination of robotics and
semantic understanding enables task autonomy in robotic retinal surgery, such
as for subretinal injection. This procedure requires precise needle insertion
for best treatment outcomes. However, merging robotic systems with iOCT
introduces new challenges. These include, but are not limited to, high demands
on data processing rates and dynamic registration of these systems during the
procedure. In this work, we propose a framework for autonomous robotic
navigation for subretinal injection, based on intelligent real-time processing
of iOCT volumes. Our method consists of instrument pose estimation, an online
registration between the robotic and the iOCT system, and trajectory
planning tailored for navigation to an injection target. We also introduce
intelligent virtual B-scans, a volume slicing approach for rapid instrument
pose estimation, which is enabled by Convolutional Neural Networks (CNNs). Our
experiments on ex-vivo porcine eyes demonstrate the precision and repeatability
of the method. Finally, we discuss identified challenges in this work and
suggest potential solutions to further the development of such systems.
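As an illustration of the volume-slicing idea described in the abstract, the following minimal sketch resamples a 2D "virtual B-scan" from a 3D iOCT volume along an arbitrary plane, for example one containing the estimated needle axis. It is not the authors' implementation: the function name extract_virtual_bscan, the slice geometry, and the use of trilinear interpolation via SciPy are assumptions made for this example.

```python
# Hypothetical sketch of virtual B-scan extraction: sample a 2D slice from a
# 3D iOCT volume along an arbitrary plane (e.g. one containing the estimated
# needle axis). Names and parameters are illustrative, not from the paper.
import numpy as np
from scipy.ndimage import map_coordinates

def extract_virtual_bscan(volume, center, u_axis, v_axis, size=(256, 256), spacing=1.0):
    """Resample `volume` (indexed z, y, x) on the plane through `center`
    spanned by the (ideally orthogonal) directions `u_axis` and `v_axis`."""
    u = np.asarray(u_axis, float); u /= np.linalg.norm(u)
    v = np.asarray(v_axis, float); v /= np.linalg.norm(v)
    h, w = size
    # Pixel grid of the virtual B-scan, centered on `center`.
    ii, jj = np.meshgrid(np.arange(h) - h / 2.0, np.arange(w) - w / 2.0, indexing="ij")
    pts = (np.asarray(center, float)[:, None, None]
           + spacing * (ii[None] * u[:, None, None] + jj[None] * v[:, None, None]))
    # Trilinear interpolation of the volume at the plane's sample points.
    return map_coordinates(volume, pts, order=1, mode="nearest")

# Example on a placeholder volume, slicing along an assumed needle direction.
vol = np.random.rand(128, 256, 256).astype(np.float32)
bscan = extract_virtual_bscan(vol, center=(64, 128, 128),
                              u_axis=(1.0, 0.0, 0.0),   # depth (A-scan) direction
                              v_axis=(0.0, 0.3, 1.0))   # assumed needle axis
print(bscan.shape)  # (256, 256)
```

The abstract also mentions an online registration between the robotic and the iOCT coordinate frames. The paper's procedure is not detailed in this listing; as a generic, hedged illustration only, a point-based rigid alignment (Kabsch/SVD) between instrument-tip positions observed in both frames could look like:

```python
import numpy as np

def rigid_register(src_pts, dst_pts):
    """Least-squares rigid transform (R, t) mapping src_pts to dst_pts,
    both (N, 3) arrays; a generic Kabsch sketch, not the paper's algorithm."""
    src_c = src_pts - src_pts.mean(axis=0)
    dst_c = dst_pts - dst_pts.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_pts.mean(axis=0) - R @ src_pts.mean(axis=0)
    return R, t
```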
Related papers
- Transforming Surgical Interventions with Embodied Intelligence for Ultrasound Robotics [24.014073238400137]
This paper introduces a novel Ultrasound Embodied Intelligence system that combines ultrasound robots with large language models (LLMs) and domain-specific knowledge augmentation.
Our approach employs a dual strategy: firstly, integrating LLMs with ultrasound robots to translate doctors' verbal instructions into precise motion planning.
Our findings suggest that the proposed system improves the efficiency and quality of ultrasound scans and paves the way for further advancements in autonomous medical scanning technologies.
arXiv Detail & Related papers (2024-06-18T14:22:16Z)
- Creating a Digital Twin of Spinal Surgery: A Proof of Concept [68.37190859183663]
Surgery digitalization is the process of creating a virtual replica of real-world surgery.
We present a proof of concept (PoC) for surgery digitalization that is applied to an ex-vivo spinal surgery.
We employ five RGB-D cameras for dynamic 3D reconstruction of the surgeon, a high-end camera for 3D reconstruction of the anatomy, an infrared stereo camera for surgical instrument tracking, and a laser scanner for 3D reconstruction of the operating room and data fusion.
arXiv Detail & Related papers (2024-03-25T13:09:40Z) - CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset consisting of unseen synthetic data and images collected from silicone aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z) - Automating Catheterization Labs with Real-Time Perception [31.65246126754449]
AutoCBCT is a visual perception system seamlessly integrated with an angiography suite.
It enables a novel workflow with automated positioning, navigation and simulated test-runs, eliminating the need for manual operations and interactions.
The proposed system has been successfully deployed and studied in both lab and clinical settings, demonstrating significantly improved workflow efficiency.
arXiv Detail & Related papers (2024-03-09T02:05:23Z)
- GLSFormer: Gated - Long, Short Sequence Transformer for Step Recognition in Surgical Videos [57.93194315839009]
We propose a vision transformer-based approach to learn temporal features directly from sequence-level patches.
We extensively evaluate our approach on two cataract surgery video datasets, Cataract-101 and D99, and demonstrate superior performance compared to various state-of-the-art methods.
arXiv Detail & Related papers (2023-07-20T17:57:04Z)
- Explainable Artificial Intelligence in Retinal Imaging for the detection of Systemic Diseases [0.0]
This study aims to evaluate an explainable staged grading process without using deep Convolutional Neural Networks (CNNs) directly.
We have proposed a clinician-in-the-loop assisted intelligent workflow that performs a retinal vascular assessment on the fundus images.
The semiautomatic methodology aims at a federated approach to AI in healthcare applications, with more inputs and interpretations from clinicians.
arXiv Detail & Related papers (2022-12-14T07:00:31Z)
- Towards Autonomous Atlas-based Ultrasound Acquisitions in Presence of Articulated Motion [48.52403516006036]
This paper proposes a vision-based approach allowing autonomous robotic US limb scanning.
To this end, an atlas MRI template of a human arm with annotated vascular structures is used to generate trajectories.
In all cases, the system can successfully acquire the planned vascular structure on volunteers' limbs.
arXiv Detail & Related papers (2022-08-10T15:39:20Z)
- Autonomous Intraluminal Navigation of a Soft Robot using Deep-Learning-based Visual Servoing [13.268863900187025]
We present a synergic solution for intraluminal navigation consisting of a 3D printed endoscopic soft robot.
Visual servoing, based on Convolutional Neural Networks (CNNs), is used to achieve the autonomous navigation task.
The proposed robot is validated in anatomical phantoms in different path configurations.
arXiv Detail & Related papers (2022-07-01T13:17:45Z)
- Relational Graph Learning on Visual and Kinematics Embeddings for Accurate Gesture Recognition in Robotic Surgery [84.73764603474413]
We propose a novel online multi-modal relational graph network (MRG-Net) to dynamically integrate visual and kinematics information.
The effectiveness of our method is demonstrated with state-of-the-art results on the public JIGSAWS dataset.
arXiv Detail & Related papers (2020-11-03T11:00:10Z)
- Real-Time Instrument Segmentation in Robotic Surgery using Auxiliary Supervised Deep Adversarial Learning [15.490603884631764]
Real-time semantic segmentation of the robotic instruments and tissues is a crucial step in robot-assisted surgery.
We have developed a light-weight cascaded convolutional neural network (CNN) to segment the surgical instruments from high-resolution videos.
We show that our model surpasses existing algorithms for pixel-wise segmentation of surgical instruments in both prediction accuracy and segmentation time of high-resolution videos.
arXiv Detail & Related papers (2020-07-22T10:16:07Z)