Direct Zernike Coefficient Prediction from Point Spread Functions and Extended Images using Deep Learning
- URL: http://arxiv.org/abs/2404.15231v2
- Date: Wed, 24 Apr 2024 15:23:47 GMT
- Title: Direct Zernike Coefficient Prediction from Point Spread Functions and Extended Images using Deep Learning
- Authors: Yong En Kok, Alexander Bentley, Andrew Parkes, Amanda J. Wright, Michael G. Somekh, Michael Pound
- Abstract summary: Existing adaptive optics systems rely on iterative search algorithms to correct for aberrations and improve images.
This study demonstrates the application of convolutional neural networks to characterise the optical aberration.
- Score: 36.136619420474766
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optical imaging quality can be severely degraded by system- and sample-induced aberrations. Existing adaptive optics systems typically rely on iterative search algorithms to correct for aberrations and improve images. This study demonstrates the application of convolutional neural networks to characterise the optical aberration by directly predicting the Zernike coefficients from two to three phase-diverse optical images. We evaluated our network on 600,000 simulated Point Spread Function (PSF) datasets randomly generated within the range of -1 to 1 radians using the first 25 Zernike coefficients. The results show that using only three phase-diverse images captured above, below, and at the focal plane with an amplitude of 1 achieves a low RMSE of 0.10 radians on the simulated PSF dataset. Furthermore, this approach directly predicts Zernike modes for simulated extended 2D samples, while maintaining a comparable RMSE of 0.15 radians. We demonstrate that this approach is effective using only a single prediction step, or can be iterated a small number of times. This simple and straightforward technique provides a rapid and accurate method for predicting the aberration correction using three or fewer phase-diverse images, paving the way for evaluation on real-world datasets.
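The simulation pipeline the abstract describes, PSFs generated from random Zernike coefficients in the range -1 to 1 radians and imaged at three defocus planes, can be sketched roughly as follows. This is an illustrative sketch, not the authors' code: the grid size, the truncation to two example modes (the paper uses the first 25), and the unnormalised mode definitions are all assumptions made here for brevity.

```python
import numpy as np

def pupil_grid(n=64):
    """Polar coordinates and a unit-disc mask for a square pupil grid."""
    x = np.linspace(-1, 1, n)
    xx, yy = np.meshgrid(x, x)
    rho = np.sqrt(xx**2 + yy**2)
    theta = np.arctan2(yy, xx)
    mask = (rho <= 1.0).astype(float)
    return rho, theta, mask

# Two example Zernike modes (unnormalised, for illustration only);
# the paper's 25-mode basis would be built the same way.
def z_defocus(rho, theta):          # Z4: defocus, 2*rho^2 - 1
    return 2 * rho**2 - 1

def z_astig(rho, theta):            # Z6: vertical astigmatism
    return rho**2 * np.cos(2 * theta)

def psf_from_coeffs(coeffs, diversity=0.0, n=64):
    """Incoherent PSF for a phase aberration given in radians.
    `diversity` adds a known defocus offset, mimicking the +/-1 rad
    above/below-focus planes in the paper's three-image setup."""
    rho, theta, mask = pupil_grid(n)
    phase = coeffs[0] * z_defocus(rho, theta) + coeffs[1] * z_astig(rho, theta)
    phase = phase + diversity * z_defocus(rho, theta)
    pupil = mask * np.exp(1j * phase)
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()

# Three phase-diverse images: below, at, and above the focal plane.
coeffs = np.random.uniform(-1, 1, size=2)
stack = np.stack([psf_from_coeffs(coeffs, d) for d in (-1.0, 0.0, 1.0)])

# RMSE between predicted and true coefficients, the metric the paper reports.
def rmse(pred, true):
    return np.sqrt(np.mean((np.asarray(pred) - np.asarray(true)) ** 2))
```

A CNN trained for this task would take the three-plane `stack` as input channels and regress the coefficient vector, with `rmse` as the evaluation metric.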
Related papers
- bit2bit: 1-bit quanta video reconstruction via self-supervised photon prediction [57.199618102578576]
We propose bit2bit, a new method for reconstructing high-quality image stacks at the original resolution from sparse binary quanta image data.
Inspired by recent work on Poisson denoising, we developed an algorithm that creates a dense image sequence from sparse binary photon data.
We present a novel dataset containing a wide range of real SPAD high-speed videos under various challenging imaging conditions.
arXiv Detail & Related papers (2024-10-30T17:30:35Z) - ExposureDiffusion: Learning to Expose for Low-light Image Enhancement [87.08496758469835]
This work addresses the issue by seamlessly integrating a diffusion model with a physics-based exposure model.
Our method obtains significantly improved performance and reduced inference time compared with vanilla diffusion models.
The proposed framework can work with real-paired datasets, SOTA noise models, and different backbone networks.
arXiv Detail & Related papers (2023-07-15T04:48:35Z) - Poisson-Gaussian Holographic Phase Retrieval with Score-based Image Prior [19.231581775644617]
We propose a new algorithm called "AWFS" that uses the accelerated Wirtinger flow (AWF) with a score function as generative prior.
We calculate the gradient of the log-likelihood function for PR and determine the Lipschitz constant.
We provide theoretical analysis that establishes a critical-point convergence guarantee for the proposed algorithm.
arXiv Detail & Related papers (2023-05-12T18:08:47Z) - Weighted Encoding Optimization for Dynamic Single-pixel Imaging and Sensing [5.009136541766621]
We report a weighted optimization technique for dynamic rate-adaptive single-pixel imaging and sensing.
Experiments on the MNIST dataset validate that, once the network is trained at a sampling rate of 1, the average imaging PSNR reaches 23.50 dB at a 0.1 sampling rate.
arXiv Detail & Related papers (2022-01-08T14:11:22Z) - Deep Domain Adversarial Adaptation for Photon-efficient Imaging Based on Spatiotemporal Inception Network [11.58898808789911]
In single-photon LiDAR, photon-efficient imaging captures the 3D structure of a scene from only a few signal photons detected per pixel.
Existing deep learning models for this task are trained on simulated datasets, which poses the domain shift challenge when applied to realistic scenarios.
We propose a network (STIN) for photon-efficient imaging, which is able to precisely predict the depth from a sparse and high-noise photon counting histogram by fully exploiting spatial and temporal information.
arXiv Detail & Related papers (2022-01-07T14:51:48Z) - Robust photon-efficient imaging using a pixel-wise residual shrinkage network [7.557893223548758]
Single-photon light detection and ranging (LiDAR) has been widely applied to 3D imaging in challenging scenarios.
However, limited signal photon counts and high noise in the collected data pose great challenges for precisely predicting the depth image.
We propose a pixel-wise residual shrinkage network for photon-efficient imaging from high-noise data.
arXiv Detail & Related papers (2022-01-05T05:08:12Z) - Deep Learning Adapted Acceleration for Limited-view Photoacoustic Computed Tomography [1.8830359888767887]
Photoacoustic computed tomography (PACT) uses unfocused large-area light to illuminate the target, with an ultrasound transducer array for PA signal detection.
The limited-view issue can cause low-quality images in PACT due to geometric constraints.
A model-based method that combines the mathematical variational model with deep learning is proposed to speed up and regularize the unrolled procedure of reconstruction.
arXiv Detail & Related papers (2021-11-08T02:05:58Z) - Dense Optical Flow from Event Cameras [55.79329250951028]
We propose to incorporate feature correlation and sequential processing into dense optical flow estimation from event cameras.
Our proposed approach computes dense optical flow and reduces the end-point error by 23% on MVSEC.
arXiv Detail & Related papers (2021-08-24T07:39:08Z) - Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and makes it more efficient.
arXiv Detail & Related papers (2021-03-22T18:06:58Z) - Single Image Brightening via Multi-Scale Exposure Fusion with Hybrid Learning [48.890709236564945]
A small ISO and a short exposure time are usually used to capture an image in backlit or low-light conditions.
In this paper, a single image brightening algorithm is introduced to brighten such an image.
The proposed algorithm includes a unique hybrid learning framework to generate two virtual images with large exposure times.
arXiv Detail & Related papers (2020-07-04T08:23:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.