Stable Tracking of Eye Gaze Direction During Ophthalmic Surgery
- URL: http://arxiv.org/abs/2507.00635v1
- Date: Tue, 01 Jul 2025 10:28:40 GMT
- Title: Stable Tracking of Eye Gaze Direction During Ophthalmic Surgery
- Authors: Tinghe Hong, Shenlin Cai, Boyang Li, Kai Huang
- Abstract summary: This study proposes an innovative eye localization and tracking method that combines machine learning with traditional algorithms. The proposed method achieves an average estimation error of 0.58 degrees for eye orientation and an average control error of 2.08 degrees for the robotic arm's movement.
- Score: 6.887111029199582
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Ophthalmic surgical robots offer superior stability and precision by reducing the natural hand tremors of human surgeons, enabling delicate operations in confined surgical spaces. Despite advances in vision- and force-based control methods for surgical robots, preoperative navigation remains heavily reliant on manual operation, limiting consistency and increasing uncertainty. Existing eye gaze estimation techniques for surgery, whether traditional or deep learning-based, face challenges including dependence on additional sensors, occlusion in surgical environments, and the requirement for facial detection. To address these limitations, this study proposes an innovative eye localization and tracking method that combines machine learning with traditional algorithms, eliminating the need for facial landmarks and maintaining stable iris detection and gaze estimation under varying lighting and shadow conditions. Extensive real-world experiments show that the proposed method achieves an average estimation error of 0.58 degrees for eye orientation and an average control error of 2.08 degrees for the robotic arm's movement based on the calculated orientation.
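The abstract does not reproduce the paper's pipeline, so the sketch below is only a hedged illustration of what landmark-free, iris-based gaze estimation can look like: it fits an ellipse to a segmented iris region with OpenCV and derives a coarse eye orientation from the ellipse geometry. The segmentation input, thresholds, and the circular-iris assumption are illustrative, not taken from the paper.

```python
import cv2
import numpy as np

def gaze_from_iris_mask(iris_mask: np.ndarray):
    """Coarse gaze estimate from a binary iris mask (H x W, values 0/1).

    A frontal circular iris projects to a circle; as the eye rotates,
    the projection becomes an ellipse whose minor/major axis ratio is
    approximately cos(rotation angle). Returns (rotation_deg, axis_deg)
    or None if no usable contour is found.
    """
    contours, _ = cv2.findContours(iris_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    contour = max(contours, key=cv2.contourArea)
    if len(contour) < 5:                 # fitEllipse needs at least 5 points
        return None
    _center, axes, axis_deg = cv2.fitEllipse(contour)
    minor, major = sorted(axes)
    ratio = np.clip(minor / max(major, 1e-6), 0.0, 1.0)
    rotation_deg = float(np.degrees(np.arccos(ratio)))
    return rotation_deg, axis_deg        # magnitude and image-plane direction
```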
Related papers
- Benchmarking Laparoscopic Surgical Image Restoration and Beyond [54.28852320829451]
In laparoscopic surgery, a clear and high-quality visual field is critical for surgeons to make accurate decisions. Persistent visual degradations, including smoke generated by energy devices, lens fogging from thermal gradients, and lens contamination, pose risks to patient safety. We introduce SurgClean, a real-world open-source surgical image restoration dataset covering laparoscopic environments.
arXiv Detail & Related papers (2025-05-25T14:17:56Z)
- SurgRIPE challenge: Benchmark of Surgical Robot Instrument Pose Estimation [33.39658645724101]
Vision-based methods for surgical instrument pose estimation provide a practical approach to tool tracking, but they often require markers to be attached to the instruments. Recently, more research has focused on the development of marker-less methods based on deep learning. We introduce the Surgical Robot Instrument Pose Estimation (SurgRIPE) challenge, hosted at the 26th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) in 2023. The SurgRIPE challenge has successfully established a new benchmark for the field, encouraging further research and development in surgical robot instrument pose estimation.
arXiv Detail & Related papers (2025-01-06T13:02:44Z)
- Deep intra-operative illumination calibration of hyperspectral cameras [73.08443963791343]
Hyperspectral imaging (HSI) is emerging as a promising novel imaging modality with various potential surgical applications.
We show that dynamically changing lighting conditions in the operating room dramatically affect the performance of HSI applications.
We propose a novel learning-based approach for automatically recalibrating hyperspectral images during surgery.
arXiv Detail & Related papers (2024-09-11T08:30:03Z)
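The learned recalibration itself is not described in this summary; for context, the static white/dark reference calibration that such a method would replace is a one-line formula, sketched below (array shapes and clipping behavior are illustrative assumptions, not from the paper).

```python
import numpy as np

def flat_field_calibrate(raw: np.ndarray,
                         white_ref: np.ndarray,
                         dark_ref: np.ndarray) -> np.ndarray:
    """Classic white/dark reference calibration for hyperspectral cubes.

    raw, white_ref, dark_ref: arrays of shape (H, W, bands) captured with
    identical optics. Returns per-pixel reflectance in [0, 1]. This static
    procedure is what a learning-based method would supersede when lighting
    changes dynamically during surgery.
    """
    denom = np.clip(white_ref - dark_ref, 1e-6, None)  # avoid divide-by-zero
    reflectance = (raw - dark_ref) / denom
    return np.clip(reflectance, 0.0, 1.0)
```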
- Robotic Constrained Imitation Learning for the Peg Transfer Task in Fundamentals of Laparoscopic Surgery [18.64205729932939]
We present an implementation strategy for a robot that performs peg transfer tasks in Fundamentals of Laparoscopic Surgery (FLS) via imitation learning.
In this study, we achieve more accurate imitation learning with only monocular images.
We implemented an overall system using two Franka Emika Panda Robot Arms and validated its effectiveness.
arXiv Detail & Related papers (2024-05-06T13:12:25Z)
- Redefining the Laparoscopic Spatial Sense: AI-based Intra- and Postoperative Measurement from Stereoimages [3.2039076408339353]
We develop a novel human-AI-based method for laparoscopic measurements utilizing stereo vision.
Based on a holistic qualitative requirements analysis, this work proposes a comprehensive measurement method.
Our results demonstrate the potential of our method to achieve high accuracy in distance measurements, with errors below 1 mm.
arXiv Detail & Related papers (2023-11-16T10:19:04Z)
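The measurement pipeline is not detailed in this summary, but the underlying geometry is standard stereo triangulation: with focal length f (pixels), baseline B, and disparity d, depth is Z = f·B/d. A minimal sketch, assuming a rectified and calibrated stereo pair (the function names and parameters are illustrative):

```python
import numpy as np

def disparity_to_depth(disparity_px: float,
                       focal_px: float,
                       baseline_mm: float) -> float:
    """Depth of a point from a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("point must have positive disparity")
    return focal_px * baseline_mm / disparity_px

def measure_3d_distance(pt_a, pt_b, disp_a, disp_b,
                        focal_px, baseline_mm, principal_point):
    """Metric distance between two points in the left image of a
    rectified stereo pair, obtained by back-projecting each to 3D."""
    def backproject(uv, disp):
        z = disparity_to_depth(disp, focal_px, baseline_mm)
        x = (uv[0] - principal_point[0]) * z / focal_px
        y = (uv[1] - principal_point[1]) * z / focal_px
        return np.array([x, y, z])
    return float(np.linalg.norm(backproject(pt_a, disp_a) -
                                backproject(pt_b, disp_b)))
```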
- EyeLS: Shadow-Guided Instrument Landing System for Intraocular Target Approaching in Robotic Eye Surgery [51.05595735405451]
Robotic ophthalmic surgery is an emerging technology to facilitate high-precision interventions such as retina penetration in subretinal injection and removal of floating tissues in retinal detachment.
Current image-based methods cannot effectively estimate the needle tip's trajectory towards both retinal and floating targets.
We propose to use the shadow positions of the target and the instrument tip to estimate their relative depth position.
Our method successfully approaches targets on a retina model, achieving average depth errors of 0.0127 mm and 0.3473 mm for floating and retinal targets, respectively, in the surgical simulator.
arXiv Detail & Related papers (2023-11-15T09:11:37Z)
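The paper's estimator is not given in this summary; the intuition behind shadow-guided depth is that, under an approximately fixed light source, an instrument tip and its cast shadow converge in the image as the tip approaches the surface, so the tip-shadow pixel distance is a monotone proxy for tip-to-surface depth. A minimal sketch under that assumption (the geometry gain and scale factor are placeholders, not the paper's calibration):

```python
import numpy as np

def tip_shadow_depth_proxy(tip_px: np.ndarray,
                           shadow_px: np.ndarray,
                           mm_per_px: float,
                           geometry_gain: float = 1.0) -> float:
    """Relative-depth proxy from tip/shadow image positions.

    The image-space gap between the instrument tip and its shadow
    shrinks to zero as the tip touches the surface. geometry_gain
    folds in an assumed, pre-calibrated light-angle factor.
    """
    gap_px = float(np.linalg.norm(tip_px - shadow_px))
    return geometry_gain * mm_per_px * gap_px
```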
- End-to-End assessment of AR-assisted neurosurgery systems [0.5892638927736115]
We classify different techniques for assessing an AR-assisted neurosurgery system and propose a new technique to systematize the assessment procedure.
We found that although the system is subject to registration and tracking errors, physical feedback can significantly reduce the error caused by hologram displacement.
The lack of visual feedback on the hologram does not have a significant effect on the user's 3D perception.
arXiv Detail & Related papers (2023-11-03T13:41:44Z)
- Safe Deep RL for Intraoperative Planning of Pedicle Screw Placement [61.28459114068828]
We propose an intraoperative planning approach for robotic spine surgery that leverages real-time observation for drill path planning based on Safe Deep Reinforcement Learning (DRL).
Our approach achieved 90% bone penetration with respect to the gold standard (GS) drill planning.
arXiv Detail & Related papers (2023-05-09T11:42:53Z)
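The paper's safe-DRL formulation is not given in this summary; a common ingredient of safe RL for such control tasks is a safety layer that modifies the policy's proposed action so the next pose stays inside a precomputed safe region. A minimal sketch under that assumption (the axis-aligned corridor is a stand-in for an anatomy-derived constraint model, not the paper's):

```python
import numpy as np

def shield_action(state: np.ndarray,
                  proposed_action: np.ndarray,
                  safe_lo: np.ndarray,
                  safe_hi: np.ndarray) -> np.ndarray:
    """Minimal 'safety layer' sketch for constrained control.

    Clips the policy's proposed displacement so that the next state
    stays inside an axis-aligned safe corridor [safe_lo, safe_hi],
    returning the corrected action.
    """
    next_state = state + proposed_action
    clipped_next = np.minimum(np.maximum(next_state, safe_lo), safe_hi)
    return clipped_next - state
```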
- Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z)
- Dissecting Self-Supervised Learning Methods for Surgical Computer Vision [51.370873913181605]
Self-Supervised Learning (SSL) methods have begun to gain traction in the general computer vision community.
The effectiveness of SSL methods in more complex and impactful domains, such as medicine and surgery, remains limited and largely unexplored.
We present an extensive analysis of the performance of these methods on the Cholec80 dataset for two fundamental and popular tasks in surgical context understanding: phase recognition and tool presence detection.
arXiv Detail & Related papers (2022-07-01T14:17:11Z)
- Towards Augmented Reality-based Suturing in Monocular Laparoscopic Training [0.5707453684578819]
The paper proposes an Augmented Reality environment with quantitative and qualitative visual representations to enhance the outcomes of laparoscopic training performed on a silicone pad.
This is enabled by a multi-task supervised deep neural network which performs multi-class segmentation and depth map prediction.
The network achieves a Dice score of 0.67 for surgical needle segmentation, 0.81 for needle holder instrument segmentation, and a mean absolute error of 6.5 mm for depth estimation.
arXiv Detail & Related papers (2020-01-19T19:59:58Z)
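For reference, the Dice score quoted in the segmentation results above is defined as 2|A ∩ B| / (|A| + |B|) between predicted and ground-truth masks; a minimal implementation (the epsilon smoothing is a common convention, assumed here):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks:
    2 * |pred AND target| / (|pred| + |target|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))
```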
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.