Dual-Modality Computational Ophthalmic Imaging with Deep Learning and Coaxial Optical Design
- URL: http://arxiv.org/abs/2504.18549v1
- Date: Sun, 13 Apr 2025 05:35:17 GMT
- Title: Dual-Modality Computational Ophthalmic Imaging with Deep Learning and Coaxial Optical Design
- Authors: Boyuan Peng, Jiaju Chen, Yiwei Zhang, Cuiyi Peng, Junyang Li, Jiaming Deng, Peiwu Qin,
- Abstract summary: This study presents a compact, dual-function optical device that integrates fundus photography and refractive error detection into a unified platform. The proposed framework offers a promising solution for rapid, intelligent, and scalable ophthalmic screening, particularly suitable for community health settings.
- Score: 6.979642495062275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The growing burden of myopia and retinal diseases necessitates more accessible and efficient eye screening solutions. This study presents a compact, dual-function optical device that integrates fundus photography and refractive error detection into a unified platform. The system features a coaxial optical design using dichroic mirrors to separate wavelength-dependent imaging paths, enabling simultaneous alignment of fundus and refraction modules. A Dense-U-Net-based algorithm with customized loss functions is employed for accurate pupil segmentation, facilitating automated alignment and focusing. Experimental evaluations demonstrate the system's capability to achieve high-precision pupil localization (EDE = 2.8 px, mIoU = 0.931) and reliable refractive estimation with a mean absolute error below 5%. Despite limitations due to commercial lens components, the proposed framework offers a promising solution for rapid, intelligent, and scalable ophthalmic screening, particularly suitable for community health settings.
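The reported pupil-localization metrics (EDE and mIoU) can be reproduced from predicted and ground-truth pupil masks. A minimal sketch follows; the toy masks, the `pupil_metrics` name, and the centroid-based center estimate are illustrative assumptions, not the paper's code:

```python
import numpy as np

def pupil_metrics(pred_mask: np.ndarray, true_mask: np.ndarray):
    """Euclidean distance error (EDE) between mask centroids, in pixels,
    and intersection-over-union (IoU) of two binary pupil masks."""
    pred_c = np.argwhere(pred_mask).mean(axis=0)   # (row, col) centroid
    true_c = np.argwhere(true_mask).mean(axis=0)
    ede = float(np.linalg.norm(pred_c - true_c))
    inter = np.logical_and(pred_mask, true_mask).sum()
    union = np.logical_or(pred_mask, true_mask).sum()
    iou = float(inter / union)
    return ede, iou

# Toy 10x10 masks: a ground-truth pupil and a prediction shifted by (1, 1).
true_mask = np.zeros((10, 10), bool); true_mask[2:6, 2:6] = True
pred_mask = np.zeros((10, 10), bool); pred_mask[3:7, 3:7] = True
ede, iou = pupil_metrics(pred_mask, true_mask)
print(round(ede, 3), round(iou, 3))  # ede = sqrt(2) ≈ 1.414, iou = 9/23 ≈ 0.391
```

The paper reports these metrics averaged over a test set (EDE = 2.8 px, mIoU = 0.931); this sketch only shows the single-image computation.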
Related papers
- Deep Learning for Optical Misalignment Diagnostics in Multi-Lens Imaging Systems [0.0]
We present two complementary deep learning-based inverse-design methods for diagnosing misalignments in multi-element lens systems.
First, we use ray-traced spot diagrams to predict five-degree-of-freedom (5-DOF) errors in a 6-lens photographic prime, achieving a mean absolute error of 0.031 mm in lateral translation and 0.011° in tilt.
We also introduce a physics-based simulation pipeline that utilizes grayscale synthetic camera images, enabling a deep learning model to estimate 4-DOF decenter and tilt errors in both two- and six-lens systems.
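The reported errors are per-degree-of-freedom mean absolute errors of a misalignment regressor. A minimal evaluation sketch, with made-up predictions and targets (the column layout, two decenter columns in mm and three tilt columns in degrees, is an assumption for illustration):

```python
import numpy as np

# Hypothetical regressor outputs vs. ground truth for two test samples.
# Columns 0-1: lateral decenter (mm); columns 2-4: tilts (degrees).
pred = np.array([[0.10, -0.05, 0.20, 0.00, 0.10],
                 [0.00,  0.15, -0.10, 0.05, 0.00]])
true = np.array([[0.12, -0.02, 0.18, 0.01, 0.12],
                 [0.03,  0.11, -0.10, 0.02, 0.01]])

mae = np.mean(np.abs(pred - true), axis=0)  # one MAE per degree of freedom
print(mae)
```

Averaging per DOF (rather than pooling all columns) keeps translation and tilt errors in their own units, matching how the paper quotes 0.031 mm and 0.011° separately.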
arXiv Detail & Related papers (2025-06-29T10:13:40Z)
- Automated Segmentation and Analysis of Cone Photoreceptors in Multimodal Adaptive Optics Imaging [3.7243418909643093]
We used confocal and non-confocal split detector images to analyze photoreceptors for improved accuracy.
We explored two U-Net-based segmentation models: StarDist for confocal and Cellpose for calculated modalities.
arXiv Detail & Related papers (2024-10-19T17:10:38Z)
- Spatial-aware Transformer-GRU Framework for Enhanced Glaucoma Diagnosis from 3D OCT Imaging [1.8416014644193066]
We present a novel deep learning framework that leverages the diagnostic value of 3D Optical Coherence Tomography (OCT) imaging for automated glaucoma detection.
We integrate a pre-trained Vision Transformer on retinal data for rich slice-wise feature extraction and a bidirectional Gated Recurrent Unit for capturing inter-slice spatial dependencies.
Experimental results on a large dataset demonstrate the superior performance of the proposed method over state-of-the-art ones.
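The slice-wise-features-plus-bidirectional-GRU aggregation described above can be sketched in miniature. This is an illustrative stand-in, not the paper's implementation: the random vectors play the role of pre-trained ViT slice features, and all dimensions and initializations are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def gru_step(x, h, W, U, b):
    """One GRU cell update; W, U, b each stack the update/reset/candidate gates."""
    def sig(a): return 1.0 / (1.0 + np.exp(-a))
    z = sig(W[0] @ x + U[0] @ h + b[0])              # update gate
    r = sig(W[1] @ x + U[1] @ h + b[1])              # reset gate
    h_hat = np.tanh(W[2] @ x + U[2] @ (r * h) + b[2])  # candidate state
    return (1 - z) * h + z * h_hat

def bidirectional_gru(feats, params_f, params_b, hidden=8):
    """Aggregate per-slice feature vectors into one volume-level vector by
    running a GRU forward and backward over the slice axis and
    concatenating the two final hidden states."""
    hf = np.zeros(hidden)
    hb = np.zeros(hidden)
    for x in feats:
        hf = gru_step(x, hf, *params_f)
    for x in feats[::-1]:
        hb = gru_step(x, hb, *params_b)
    return np.concatenate([hf, hb])

feat_dim, hidden, n_slices = 16, 8, 12
def init():
    return (rng.normal(0, 0.1, (3, hidden, feat_dim)),
            rng.normal(0, 0.1, (3, hidden, hidden)),
            np.zeros((3, hidden)))

slice_feats = rng.normal(size=(n_slices, feat_dim))  # stand-in for ViT features
v = bidirectional_gru(slice_feats, init(), init(), hidden)
print(v.shape)  # (2 * hidden,) volume-level descriptor
```

The concatenated final states give each OCT volume a fixed-size descriptor regardless of slice count, which is what makes a downstream classifier straightforward to attach.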
arXiv Detail & Related papers (2024-03-08T22:25:15Z)
- EyeLS: Shadow-Guided Instrument Landing System for Intraocular Target Approaching in Robotic Eye Surgery [51.05595735405451]
Robotic ophthalmic surgery is an emerging technology to facilitate high-precision interventions such as retina penetration in subretinal injection and removal of floating tissues in retinal detachment.
Current image-based methods cannot effectively estimate the needle tip's trajectory towards both retinal and floating targets.
We propose to use the shadow positions of the target and the instrument tip to estimate their relative depth position.
Our method successfully approaches targets on a retina model, achieving average depth errors of 0.0127 mm and 0.3473 mm for floating and retinal targets, respectively, in the surgical simulator.
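The shadow cue behind this approach can be illustrated with simple geometry: under roughly fixed illumination, the image-plane separation between an instrument tip and its cast shadow shrinks to zero as the tip reaches the surface. The sketch below is only that geometric intuition; the function name and the calibration factor `k_mm_per_px` are hypothetical, not the paper's estimator:

```python
import math

def relative_depth_from_shadow(tip_px, shadow_px, k_mm_per_px=0.01):
    """Illustrative shadow-based depth cue: scale the pixel distance between
    the detected instrument tip and its shadow by a calibration factor to get
    a relative depth estimate in mm."""
    d_px = math.dist(tip_px, shadow_px)
    return k_mm_per_px * d_px

# Tip far above the retina: large tip-shadow separation -> larger depth.
far = relative_depth_from_shadow((100, 100), (160, 180))
# Tip touching the surface: tip and shadow coincide -> zero depth.
touch = relative_depth_from_shadow((100, 100), (100, 100))
print(far, touch)
```

The zero-at-contact property is what makes the cue usable for both floating and retinal targets: the same signal indicates arrival regardless of the target's absolute depth.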
arXiv Detail & Related papers (2023-11-15T09:11:37Z)
- Revealing the preference for correcting separated aberrations in joint optic-image design [19.852225245159598]
We characterize the optics with separated aberrations to achieve efficient joint design of complex systems such as smartphones and drones.
An image simulation system is presented to reproduce the genuine imaging procedure of lenses with large field-of-views.
Experiments reveal that the preference for correcting separated aberrations in joint design is as follows: longitudinal chromatic aberration, lateral chromatic aberration, spherical aberration, field curvature, and coma, with astigmatism coming last.
arXiv Detail & Related papers (2023-09-08T14:12:03Z)
- Deep learning network to correct axial and coronal eye motion in 3D OCT retinal imaging [65.47834983591957]
We propose deep learning based neural networks to correct axial and coronal motion artifacts in OCT based on a single scan.
The experimental result shows that the proposed method can effectively correct motion artifacts and achieve smaller error than other methods.
arXiv Detail & Related papers (2023-05-27T03:55:19Z)
- Meta-lenses for differential imaging based on weak measurement [3.3944759178279424]
We propose and demonstrate experimentally three meta-lenses for differential imaging employing the framework of weak measurement.
Based on Fresnel-lens-like structures, our meta-lenses integrate the previously separate weak-measurement components at the wavelength scale.
In addition to its potential importance in heavily integrated all-optical neural networks, the differential lens can be easily incorporated in the existing imaging systems.
arXiv Detail & Related papers (2023-04-06T02:20:08Z)
- A parameter refinement method for Ptychography based on Deep Learning concepts [55.41644538483948]
Coarse parametrisation of propagation distance, position errors, and partial coherence frequently threatens experiment viability.
A modern deep learning framework is used to autonomously correct these setup inconsistencies, improving the quality of the ptychography reconstruction.
We tested our system on both synthetic datasets and real data acquired at the TwinMic beamline of the Elettra synchrotron facility.
arXiv Detail & Related papers (2021-05-18T10:15:17Z)
- Universal and Flexible Optical Aberration Correction Using Deep-Prior Based Deconvolution [51.274657266928315]
We propose a PSF-aware plug-and-play deep network that takes the aberrant image and PSF map as input and produces the latent high-quality image by incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
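The pretrain-then-adapt pattern described here can be sketched with a deliberately tiny toy model. Everything below is an illustrative assumption, not the paper's network: each "lens" is reduced to a single scalar gain, and adaptation is a few gradient steps on data from the target lens:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_w(xs, ys, w0=0.0, lr=0.05, steps=200):
    """Fit a scalar model y = w * x by gradient descent on mean squared error."""
    w = w0
    for _ in range(steps):
        grad = 2 * np.mean((w * xs - ys) * xs)  # d/dw of mean((w*x - y)^2)
        w -= lr * grad
    return w

# "Pre-train" a base model on pooled data from a diverse set of lenses,
# each modeled here as a different scalar gain.
lens_gains = [1.0, 1.4, 0.8]
xs = rng.uniform(0.5, 1.5, 300)
xs_pooled = np.concatenate([xs[i::3] for i in range(3)])
ys_pooled = np.concatenate([g * xs[i::3] for i, g in enumerate(lens_gains)])
w_base = fit_w(xs_pooled, ys_pooled)            # ends up near the average gain

# "Adapt" to a given lens (true gain 1.4) by quickly refining the parameter
# from the pre-trained value on a small amount of lens-specific data.
x_new = rng.uniform(0.5, 1.5, 30)
w_adapted = fit_w(x_new, 1.4 * x_new, w0=w_base, lr=0.1, steps=50)
print(round(w_base, 2), round(w_adapted, 2))
```

The point of the pattern is that the adapted fit starts from the pooled solution rather than from scratch, so only a short refinement on the target lens's data is needed.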
arXiv Detail & Related papers (2021-04-07T12:00:38Z)
- Single-shot Hyperspectral-Depth Imaging with Learned Diffractive Optics [72.9038524082252]
We propose a compact single-shot monocular hyperspectral-depth (HS-D) imaging method.
Our method uses a diffractive optical element (DOE), the point spread function of which changes with respect to both depth and spectrum.
To facilitate learning the DOE, we present a first HS-D dataset by building a benchtop HS-D imager.
arXiv Detail & Related papers (2020-09-01T14:19:35Z)
- Modeling and Enhancing Low-quality Retinal Fundus Images [167.02325845822276]
Low-quality fundus images increase uncertainty in clinical observation and lead to the risk of misdiagnosis.
We propose a clinically oriented fundus enhancement network (cofe-Net) to suppress global degradation factors.
Experiments on both synthetic and real images demonstrate that our algorithm effectively corrects low-quality fundus images without losing retinal details.
arXiv Detail & Related papers (2020-05-12T08:01:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences.