Deep Reinforcement Learning Based System for Intraoperative
Hyperspectral Video Autofocusing
- URL: http://arxiv.org/abs/2307.11638v1
- Date: Fri, 21 Jul 2023 15:04:21 GMT
- Title: Deep Reinforcement Learning Based System for Intraoperative
Hyperspectral Video Autofocusing
- Authors: Charlie Budd, Jianrong Qiu, Oscar MacCormac, Martin Huber, Christopher
Mower, Mirek Janatka, Théo Trotouin, Jonathan Shapey, Mads S. Bergholt and
Tom Vercauteren
- Abstract summary: This work integrates a focus-tunable liquid lens into a video HSI exoscope.
A first-of-its-kind robotic focal-time scan was performed to create a realistic and reproducible testing dataset.
- Score: 2.476200036182773
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hyperspectral imaging (HSI) captures a greater level of spectral detail than
traditional optical imaging, making it a potentially valuable intraoperative
tool when precise tissue differentiation is essential. Hardware limitations of
current optical systems used for handheld real-time video HSI result in a
limited focal depth, thereby posing usability issues for integration of the
technology into the operating room. This work integrates a focus-tunable liquid
lens into a video HSI exoscope, and proposes novel video autofocusing methods
based on deep reinforcement learning. A first-of-its-kind robotic focal-time
scan was performed to create a realistic and reproducible testing dataset. We
benchmarked our proposed autofocus algorithm against traditional policies, and
found our novel approach to perform significantly ($p<0.05$) better than
traditional techniques ($0.070\pm0.098$ mean absolute focal error compared to
$0.146\pm0.148$). In addition, we performed a blinded usability trial by having
two neurosurgeons compare the system with different autofocus policies, and
found our novel approach to be the most favourable, making our system a
desirable addition for intraoperative HSI.
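The benchmark above scores each autofocus policy by its mean absolute focal error against the ideal focus over the scan. As a minimal sketch of that metric (not the authors' code; the focal units, noise levels, and the two synthetic policies below are illustrative assumptions), per-frame focal settings can be compared like this:

```python
import random
import statistics

def mean_abs_focal_error(pred, target):
    """Mean absolute focal error between chosen and ideal focal settings."""
    return statistics.fmean(abs(p - t) for p, t in zip(pred, target))

random.seed(0)
# Hypothetical ideal focal setting per frame (normalised lens range).
target = [random.uniform(0.0, 1.0) for _ in range(200)]
# Two hypothetical policies: one tracks the target more tightly than the other.
rl_policy = [t + random.gauss(0.0, 0.07) for t in target]
classic   = [t + random.gauss(0.0, 0.15) for t in target]

mae_rl = mean_abs_focal_error(rl_policy, target)
mae_classic = mean_abs_focal_error(classic, target)
print(mae_rl < mae_classic)  # prints True: the tighter policy has lower error
```

In the paper, the corresponding per-frame errors were additionally tested for statistical significance ($p<0.05$); the abstract does not state which test was used.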
Related papers
- OPTIAGENT: A Physics-Driven Agentic Framework for Automated Optical Design [9.777936085725033]
Optical design is the process of configuring optical elements to precisely manipulate light for high-fidelity imaging.
This work represents the first attempt to bridge the gap between large language models and formal optical design algorithms.
Our model integrates with specialized optical optimization routines for end-to-end fine-tuning and precision refinement.
arXiv Detail & Related papers (2026-02-27T07:38:31Z) - DeepAf: One-Shot Spatiospectral Auto-Focus Model for Digital Pathology [37.648157065553185]
We introduce a novel automated microscopic system powered by DeepAf.
DeepAf combines spatial and spectral features through a hybrid architecture for single-shot focus prediction.
The system achieves 0.90 AUC in cancer classification at 4x magnification, a significant achievement at lower magnification than typical 20x WSI scans.
arXiv Detail & Related papers (2025-10-06T19:28:08Z) - DeepEyeNet: Adaptive Genetic Bayesian Algorithm Based Hybrid ConvNeXtTiny Framework For Multi-Feature Glaucoma Eye Diagnosis [0.0]
Glaucoma is a leading cause of irreversible blindness worldwide.
We present DeepEyeNet, a framework for automated glaucoma detection using retinal fundus images.
arXiv Detail & Related papers (2025-01-19T21:03:36Z) - Deep intra-operative illumination calibration of hyperspectral cameras [73.08443963791343]
Hyperspectral imaging (HSI) is emerging as a promising novel imaging modality with various potential surgical applications.
We show that dynamically changing lighting conditions in the operating room dramatically affect the performance of HSI applications.
We propose a novel learning-based approach to automatically recalibrating hyperspectral images during surgery.
arXiv Detail & Related papers (2024-09-11T08:30:03Z) - Focal Depth Estimation: A Calibration-Free, Subject- and Daytime Invariant Approach [0.5026434955540995]
This study introduces a groundbreaking calibration-free method for estimating focal depth.
We leverage machine learning techniques to analyze eye movement features within short sequences.
Our approach achieves a mean absolute error (MAE) of less than 10 cm, setting a new focal depth estimation accuracy standard.
arXiv Detail & Related papers (2024-08-07T07:09:14Z) - Exploring Quasi-Global Solutions to Compound Lens Based Computational Imaging Systems [15.976326291076377]
We present Quasi-Global Search Optics (QGSO) to automatically design compound lens based computational imaging systems.
QGSO serves as a transformative end-to-end lens design paradigm for superior global search ability.
arXiv Detail & Related papers (2024-04-30T01:59:25Z) - Robotic Navigation Autonomy for Subretinal Injection via Intelligent
Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z) - AOSLO-net: A deep learning-based method for automatic segmentation of
retinal microaneurysms from adaptive optics scanning laser ophthalmoscope
images [3.8848390007421196]
We introduce AOSLO-net, a deep neural network framework with customized training policy, to automatically segment MAs from AOSLO images.
We evaluate the performance of AOSLO-net using 87 DR AOSLO images demonstrating very accurate MA detection and segmentation.
arXiv Detail & Related papers (2021-06-05T05:06:36Z) - A parameter refinement method for Ptychography based on Deep Learning
concepts [55.41644538483948]
Coarse parametrisation in propagation distance, position errors and partial coherence frequently threatens experimental viability.
A modern Deep Learning framework is used to correct autonomously the setup incoherences, thus improving the quality of a ptychography reconstruction.
We tested our system on both synthetic datasets and also on real data acquired at the TwinMic beamline of the Elettra synchrotron facility.
arXiv Detail & Related papers (2021-05-18T10:15:17Z) - Universal and Flexible Optical Aberration Correction Using Deep-Prior
Based Deconvolution [51.274657266928315]
We propose a PSF-aware plug-and-play deep network, which takes the aberrant image and PSF map as input and produces the latent high-quality version by incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
arXiv Detail & Related papers (2021-04-07T12:00:38Z) - Real-Time, Deep Synthetic Aperture Sonar (SAS) Autofocus [34.77467193499518]
Synthetic aperture sonar (SAS) requires precise time-of-flight measurements of the transmitted/received waveform to produce well-focused imagery.
To overcome this, an autofocus algorithm is employed as a post-processing step after image reconstruction to improve image focus.
We propose a deep learning technique to overcome these limitations and implicitly learn the weighting function in a data-driven manner.
arXiv Detail & Related papers (2021-03-18T15:16:29Z) - Deep Autofocus for Synthetic Aperture Sonar [28.306713374371814]
In this letter, we demonstrate the potential of machine learning, specifically deep learning, to address the autofocus problem.
We formulate the problem as a self-supervised, phase error estimation task using a deep network we call Deep Autofocus.
Our results demonstrate Deep Autofocus can produce imagery that is perceptually as good as benchmark iterative techniques but at a substantially lower computational cost.
arXiv Detail & Related papers (2020-10-29T15:31:15Z) - Single-shot Hyperspectral-Depth Imaging with Learned Diffractive Optics [72.9038524082252]
We propose a compact single-shot monocular hyperspectral-depth (HS-D) imaging method.
Our method uses a diffractive optical element (DOE), the point spread function of which changes with respect to both depth and spectrum.
To facilitate learning the DOE, we present a first HS-D dataset by building a benchtop HS-D imager.
arXiv Detail & Related papers (2020-09-01T14:19:35Z) - Rapid Whole Slide Imaging via Learning-based Two-shot Virtual
Autofocusing [57.90239401665367]
Whole slide imaging (WSI) is an emerging technology for digital pathology.
We propose the concept of virtual autofocusing, which does not rely on mechanical adjustment to conduct refocusing.
arXiv Detail & Related papers (2020-03-14T13:40:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.