Machine Vision-Based Surgical Lighting System: Design and Implementation
- URL: http://arxiv.org/abs/2510.17287v1
- Date: Mon, 20 Oct 2025 08:22:45 GMT
- Title: Machine Vision-Based Surgical Lighting System: Design and Implementation
- Authors: Amir Gharghabi, Mahdi Hakiminezhad, Maryam Shafaei, Shaghayegh Gharghabi,
- Abstract summary: The YOLO model achieves 96.7% mAP@50 on the validation set consisting of annotated images simulating surgical scenes. This machine vision-based solution reduces physical strain on surgeons, improves consistency in illumination, and supports improved surgical outcomes.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Effortless and ergonomically designed surgical lighting is critical for precision and safety during procedures. However, traditional systems often rely on manual adjustments, leading to surgeon fatigue, neck strain, and inconsistent illumination due to drift and shadowing. To address these challenges, we propose a novel surgical lighting system that leverages the YOLOv11 object detection algorithm to identify a blue marker placed above the target surgical site. A high-power LED light source is then directed to the identified location using two servomotors equipped with tilt-pan brackets. The YOLO model achieves 96.7% mAP@50 on the validation set consisting of annotated images simulating surgical scenes with the blue spherical marker. By automating the lighting process, this machine vision-based solution reduces physical strain on surgeons, improves consistency in illumination, and supports improved surgical outcomes.
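The control loop the abstract describes (detect the blue marker with YOLOv11, then steer the LED with two servomotors on tilt-pan brackets) can be sketched in simplified form. The frame size, servo angle ranges, and the linear pixel-to-angle mapping below are illustrative assumptions, not the paper's actual calibration:

```python
def pixel_to_servo_angles(cx, cy, frame_w=640, frame_h=480,
                          pan_range=(0.0, 180.0), tilt_range=(45.0, 135.0)):
    """Map a detection center (pixels) to pan/tilt servo angles (degrees).

    Simplified sketch: a calibrated system would model the geometry
    between the camera and the light source; here the image frame is
    mapped linearly onto the servo ranges (all values are illustrative
    assumptions, not the paper's parameters).
    """
    pan = pan_range[0] + (cx / frame_w) * (pan_range[1] - pan_range[0])
    tilt = tilt_range[0] + (cy / frame_h) * (tilt_range[1] - tilt_range[0])
    return pan, tilt

# Marker detected at the frame center -> both servos point mid-range.
print(pixel_to_servo_angles(320, 240))  # (90.0, 90.0)
```

In practice, the detector would supply `cx, cy` from the blue marker's bounding-box center each frame, and the servo commands would be smoothed to avoid jitter in the beam.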
Related papers
- YOLOv10-Based Multi-Task Framework for Hand Localization and Laterality Classification in Surgical Videos [5.504955093712013]
We propose a framework that simultaneously localizes hands and classifies their laterality in complex surgical scenes. The model is trained on the Trauma THOMPSON Challenge 2025 Task 2 dataset.
arXiv Detail & Related papers (2026-02-21T21:41:56Z) - Evaluation of facial landmark localization performance in a surgical setting [0.0]
Many face detection algorithms have found applications in neurosurgery, ophthalmology, and plastic surgery. A common challenge in using these algorithms is variable lighting conditions and the flexibility of detection positions needed to identify and precisely localize patients. The proposed experiment tests the MediaPipe algorithm for detecting facial landmarks in a controlled setting, using a robotic arm that automatically adjusts positions while the surgical light and the phantom remain in a fixed position.
arXiv Detail & Related papers (2025-07-24T09:40:47Z) - Stable Tracking of Eye Gaze Direction During Ophthalmic Surgery [6.887111029199582]
This study proposes an innovative eye localization and tracking method that combines machine learning with traditional algorithms. Our proposed method has an average estimation error of 0.58 degrees for eye orientation estimation and a 2.08-degree average control error for the robotic arm's movement.
arXiv Detail & Related papers (2025-07-01T10:28:40Z) - Benchmarking Laparoscopic Surgical Image Restoration and Beyond [54.28852320829451]
In laparoscopic surgery, a clear and high-quality visual field is critical for surgeons to make accurate decisions. Persistent visual degradations, including smoke generated by energy devices, lens fogging from thermal gradients, and lens contamination, pose risks to patient safety. We introduce a real-world open-source surgical image restoration dataset covering laparoscopic environments, called SurgClean.
arXiv Detail & Related papers (2025-05-25T14:17:56Z) - Deep intra-operative illumination calibration of hyperspectral cameras [73.08443963791343]
Hyperspectral imaging (HSI) is emerging as a promising novel imaging modality with various potential surgical applications.
We show that dynamically changing lighting conditions in the operating room dramatically affect the performance of HSI applications.
We propose a novel learning-based approach to automatically recalibrating hyperspectral images during surgery.
arXiv Detail & Related papers (2024-09-11T08:30:03Z) - Robotic Constrained Imitation Learning for the Peg Transfer Task in Fundamentals of Laparoscopic Surgery [18.64205729932939]
We present an implementation strategy for a robot that performs peg transfer tasks in Fundamentals of Laparoscopic Surgery (FLS) via imitation learning.
In this study, we achieve more accurate imitation learning with only monocular images.
We implemented an overall system using two Franka Emika Panda Robot Arms and validated its effectiveness.
arXiv Detail & Related papers (2024-05-06T13:12:25Z) - Neural LerPlane Representations for Fast 4D Reconstruction of Deformable Tissues [52.886545681833596]
LerPlane is a novel method for fast and accurate reconstruction of surgical scenes under a single-viewpoint setting.
LerPlane treats surgical procedures as 4D volumes and factorizes them into explicit 2D planes of static and dynamic fields.
LerPlane shares static fields, significantly reducing the workload of dynamic tissue modeling.
arXiv Detail & Related papers (2023-05-31T14:38:35Z) - Next-generation Surgical Navigation: Marker-less Multi-view 6DoF Pose Estimation of Surgical Instruments [64.59698930334012]
First, we present a multi-camera capture setup consisting of static and head-mounted cameras. Second, we publish a multi-view RGB-D video dataset of ex-vivo spine surgeries, captured in a surgical wet lab and a real operating theatre. Third, we evaluate three state-of-the-art single-view and multi-view methods for the task of 6DoF pose estimation of surgical instruments.
arXiv Detail & Related papers (2023-05-05T13:42:19Z) - EasyHeC: Accurate and Automatic Hand-eye Calibration via Differentiable Rendering and Space Exploration [49.90228618894857]
We introduce a new approach to hand-eye calibration called EasyHeC, which is markerless, white-box, and delivers superior accuracy and robustness.
We propose to use two key technologies: differentiable rendering-based camera pose optimization and consistency-based joint space exploration.
Our evaluation demonstrates superior performance in synthetic and real-world datasets.
arXiv Detail & Related papers (2023-05-02T03:49:54Z) - Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of instrument pose estimation, online registration between the robotic and the iOCT systems, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z) - A Temporal Learning Approach to Inpainting Endoscopic Specularities and Its Effect on Image Correspondence [13.25903945009516]
We propose using a temporal generative adversarial network (GAN) to inpaint the hidden anatomy under specularities.
This is achieved using in-vivo data of gastric endoscopy (Hyper-Kvasir) in a fully unsupervised manner.
We also assess the effect of our method in computer vision tasks that underpin 3D reconstruction and camera motion estimation.
arXiv Detail & Related papers (2022-03-31T13:14:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.