EyeLS: Shadow-Guided Instrument Landing System for Intraocular Target
Approaching in Robotic Eye Surgery
- URL: http://arxiv.org/abs/2311.08799v1
- Date: Wed, 15 Nov 2023 09:11:37 GMT
- Title: EyeLS: Shadow-Guided Instrument Landing System for Intraocular Target
Approaching in Robotic Eye Surgery
- Authors: Junjie Yang, Zhihao Zhao, Siyuan Shen, Daniel Zapp, Mathias Maier, Kai
Huang, Nassir Navab and M. Ali Nasseri
- Abstract summary: Robotic ophthalmic surgery is an emerging technology to facilitate high-precision interventions such as retina penetration in subretinal injection and removal of floating tissues in retinal detachment.
Current image-based methods cannot effectively estimate the needle tip's trajectory towards both retinal and floating targets.
We propose to use the shadow positions of the target and the instrument tip to estimate their relative depth position.
Our method successfully approaches targets on a retina model, achieving average depth errors of 0.0127 mm and 0.3473 mm for floating and retinal targets, respectively, in the surgical simulator.
- Score: 51.05595735405451
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robotic ophthalmic surgery is an emerging technology to facilitate
high-precision interventions such as retina penetration in subretinal injection
and removal of floating tissues in retinal detachment, depending on the input
imaging modalities such as microscopy and intraoperative OCT (iOCT). Although
iOCT is explored to locate the needle tip within its range-limited ROI, it is
still difficult to coordinate iOCT's motion with the needle, especially at the
initial target-approaching stage. Meanwhile, due to 2D perspective projection
and thus the loss of depth information, current image-based methods cannot
effectively estimate the needle tip's trajectory towards both retinal and
floating targets. To address this limitation, we propose to use the shadow
positions of the target and the instrument tip to estimate their relative depth
position and accordingly optimize the instrument tip's insertion trajectory
until the tip approaches targets within iOCT's scanning area. Our method
successfully approaches targets on a retina model, achieving average depth
errors of 0.0127 mm and 0.3473 mm for floating and retinal targets,
respectively, in the surgical simulator without damaging the retina.
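The core idea above can be illustrated with a minimal sketch: under a roughly overhead light source, a point closer to the retina sits closer to its own shadow in the 2D image, so comparing the tip-shadow separation with the target-shadow separation yields a sign-bearing proxy for their relative depth. The function names and the naive proportional controller below are illustrative assumptions, not the paper's actual trajectory-optimization method.

```python
import numpy as np

def shadow_depth_gap(tip, tip_shadow, target, target_shadow):
    """Relative depth proxy from shadow separations.

    A point closer to the retina lies closer to its own shadow in the
    image plane, so the difference between the tip-shadow and
    target-shadow separations indicates which of the two is higher.
    All inputs are (x, y) pixel coordinates.
    """
    tip_sep = np.linalg.norm(np.asarray(tip, float) - np.asarray(tip_shadow, float))
    tgt_sep = np.linalg.norm(np.asarray(target, float) - np.asarray(target_shadow, float))
    # > 0 means the tip is farther from the retina than the target.
    return tip_sep - tgt_sep

def insertion_step(tip, tip_shadow, target, target_shadow, gain=0.1):
    """One step of a naive proportional controller toward the target.

    Moves laterally toward the target in the image plane and descends
    in proportion to the remaining shadow gap; a hypothetical stand-in
    for the paper's trajectory optimization, not a reimplementation.
    """
    lateral = gain * (np.asarray(target, float) - np.asarray(tip, float))
    descend = gain * shadow_depth_gap(tip, tip_shadow, target, target_shadow)
    return lateral, descend
```

In this sketch the insertion terminates once the gap falls below a threshold, at which point the tip would lie within iOCT's scanning area and the range-limited iOCT ROI could take over fine positioning.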
Related papers
- Monocular Microscope to CT Registration using Pose Estimation of the
Incus for Augmented Reality Cochlear Implant Surgery [3.8909273404657556]
We develop a method that permits direct 2D-to-3D registration of the view microscope video to the pre-operative Computed Tomography (CT) scan without the need for external tracking equipment.
Our results demonstrate the method's accuracy, with an average rotation error of less than 25 degrees and translation errors of less than 2 mm, 3 mm, and 0.55% for the x, y, and z axes, respectively.
arXiv Detail & Related papers (2024-03-12T00:26:08Z) - Medical needle tip tracking based on Optical Imaging and AI [0.0]
This paper presents an innovative technology for needle tip real-time tracking, aiming for enhanced needle insertion guidance.
Specifically, our approach creates scattering images using an optical-fiber-equipped needle and applies Convolutional Neural Network (CNN) based algorithms to estimate the needle tip's position and orientation in real time.
Given an average femoral arterial radius of 4 to 5 mm, the proposed system demonstrates great potential for precise needle guidance in femoral artery insertion procedures.
arXiv Detail & Related papers (2023-08-28T10:30:08Z) - Deep learning network to correct axial and coronal eye motion in 3D OCT
retinal imaging [65.47834983591957]
We propose deep learning-based neural networks to correct axial and coronal motion artifacts in OCT from a single scan.
Experimental results show that the proposed method effectively corrects motion artifacts and achieves smaller errors than other methods.
arXiv Detail & Related papers (2023-05-27T03:55:19Z) - Robotic Navigation Autonomy for Subretinal Injection via Intelligent
Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z) - MTCD: Cataract Detection via Near Infrared Eye Images [69.62768493464053]
Cataract is a common eye disease and one of the leading causes of blindness and vision impairment.
We present a novel algorithm for cataract detection using near-infrared eye images.
Deep learning-based eye segmentation and multitask classification networks are presented.
arXiv Detail & Related papers (2021-10-06T08:10:28Z) - Hierarchical Deep Network with Uncertainty-aware Semi-supervised
Learning for Vessel Segmentation [58.45470500617549]
We propose a hierarchical deep network where an attention mechanism localizes the low-contrast capillary regions guided by the whole vessels.
The proposed method achieves the state-of-the-art performance in the benchmarks of both retinal artery/vein segmentation in fundus images and liver portal/hepatic vessel segmentation in CT images.
arXiv Detail & Related papers (2021-05-31T06:55:43Z) - An Interpretable Multiple-Instance Approach for the Detection of
referable Diabetic Retinopathy from Fundus Images [72.94446225783697]
We propose a machine learning system for the detection of referable Diabetic Retinopathy in fundus images.
By extracting local information from image patches and combining it efficiently through an attention mechanism, our system is able to achieve high classification accuracy.
We evaluate our approach on publicly available retinal image datasets, in which it exhibits near state-of-the-art performance.
arXiv Detail & Related papers (2021-03-02T13:14:15Z) - Monocular Retinal Depth Estimation and Joint Optic Disc and Cup
Segmentation using Adversarial Networks [18.188041599999547]
We propose a novel method using an adversarial network to predict a depth map from a single image.
We obtain a very high average correlation coefficient of 0.92 under five-fold cross-validation.
We then use the depth estimation process as a proxy task for joint optic disc and cup segmentation.
arXiv Detail & Related papers (2020-07-15T06:21:46Z) - Towards Augmented Reality-based Suturing in Monocular Laparoscopic
Training [0.5707453684578819]
The paper proposes an Augmented Reality environment with quantitative and qualitative visual representations to enhance laparoscopic training outcomes performed on a silicone pad.
This is enabled by a multi-task supervised deep neural network which performs multi-class segmentation and depth map prediction.
The network achieves a Dice score of 0.67 for surgical needle segmentation, 0.81 for needle-holder instrument segmentation, and a mean absolute error of 6.5 mm for depth estimation.
arXiv Detail & Related papers (2020-01-19T19:59:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.