Point Spread Function Estimation for Wide Field Small Aperture
Telescopes with Deep Neural Networks and Calibration Data
- URL: http://arxiv.org/abs/2011.10243v2
- Date: Tue, 18 May 2021 11:48:57 GMT
- Title: Point Spread Function Estimation for Wide Field Small Aperture
Telescopes with Deep Neural Networks and Calibration Data
- Authors: Peng Jia, Xuebo Wu, Zhengyang Li, Bo Li, Weihua Wang, Qiang Liu, Adam
Popowicz
- Abstract summary: The point spread function (PSF) reflects the state of a telescope.
Estimating the PSF at any position in the field of view is hard because aberrations
induced by the optical system are quite complex.
We further develop our deep neural network (DNN) based PSF modelling method and show its applications in PSF estimation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The point spread function (PSF) reflects the state of a telescope and
plays an important role in the development of data processing methods, such as
PSF-based astrometry, photometry and image restoration. However, for wide field
small aperture telescopes (WFSATs), estimating the PSF at any position in the
field of view is hard, because aberrations induced by the optical system are
quite complex and the signal-to-noise ratio of star images is often too low for
PSF estimation. In this paper, we further develop our deep neural network (DNN)
based PSF modelling method and show its applications in PSF estimation. During
the telescope alignment and testing stage, our method collects system
calibration data through modification of optical elements within engineering
tolerances (tilting and decentering). Then we use these data to train a DNN
(Tel-Net). After training, the Tel-Net can estimate the PSF at any position in
the field of view from several discretely sampled star images. We use both
simulated and experimental data to test the performance of our method. The
results show that the Tel-Net can successfully reconstruct PSFs of WFSATs in
any state and at any position in the FoV. Its results are significantly more
precise than those obtained by the compared classic method, Inverse Distance
Weighting (IDW) interpolation. Our method lays the foundation for developing
deep neural network based data processing methods for WFSATs that require
strong prior information about PSFs.
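The classic baseline the abstract compares against, Inverse Distance Weighting, can be sketched as follows. This is a minimal illustrative implementation of generic IDW interpolation of PSF stamps, not the authors' code; the function name, array shapes, and the distance power are assumptions.

```python
import numpy as np

def idw_psf(query_pos, star_positions, star_psfs, power=2.0, eps=1e-12):
    """Estimate the PSF at a field position by inverse-distance-weighted
    averaging of PSF stamps observed at sampled star positions.

    query_pos:      (x, y) field position where the PSF is wanted
    star_positions: (N, 2) field positions of the sampled stars
    star_psfs:      (N, H, W) normalized PSF stamps at those positions
    power:          distance exponent (2 is a common choice; an assumption here)
    """
    pos = np.asarray(star_positions, dtype=float)
    psfs = np.asarray(star_psfs, dtype=float)
    # Distance from each sampled star to the query position.
    d = np.linalg.norm(pos - np.asarray(query_pos, dtype=float), axis=1)
    # Closer stars get larger weights; eps avoids division by zero
    # when the query coincides with a sampled star.
    w = 1.0 / (d**power + eps)
    w /= w.sum()
    # Weighted sum of the PSF stamps, then renormalize to unit flux.
    psf = np.tensordot(w, psfs, axes=1)
    return psf / psf.sum()
```

Because the weights depend only on geometric distance in the focal plane, IDW cannot use knowledge of the optical system's aberration structure, which is the gap the Tel-Net approach is designed to close.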
Related papers
- Robust Depth Enhancement via Polarization Prompt Fusion Tuning [112.88371907047396]
We present a framework that leverages polarization imaging to improve inaccurate depth measurements from various depth sensors.
Our method first adopts a learning-based strategy where a neural network is trained to estimate a dense and complete depth map from polarization data and a sensor depth map from different sensors.
To further improve the performance, we propose a Polarization Prompt Fusion Tuning (PPFT) strategy to effectively utilize RGB-based models pre-trained on large-scale datasets.
arXiv Detail & Related papers (2024-04-05T17:55:33Z) - PI-AstroDeconv: A Physics-Informed Unsupervised Learning Method for
Astronomical Image Deconvolution [10.065997984277605]
We propose an unsupervised network architecture that incorporates prior physical information.
The network adopts an encoder-decoder structure while leveraging the telescope's PSF as prior knowledge.
arXiv Detail & Related papers (2024-03-04T02:52:29Z) - Perception of Misalignment States for Sky Survey Telescopes with the
Digital Twin and the Deep Neural Networks [16.245776159991294]
We propose a deep neural network to extract misalignment states from continuously varying point spread functions in different fields of view.
We store misalignment data and explore complex relationships between misalignment states and corresponding point spread functions.
The method could be used to provide prior information for the active optics system and the optical system alignment.
arXiv Detail & Related papers (2023-11-30T03:16:27Z) - Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation network unfolding (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z) - Rethinking data-driven point spread function modeling with a
differentiable optical model [0.19947949439280027]
In astronomy, upcoming space telescopes with wide-field optical instruments have a spatially varying point spread function (PSF).
Current data-driven PSF models can tackle spatial variations and super-resolution, but are not capable of capturing chromatic variations.
By adding a differentiable optical forward model into the modeling framework, we change the data-driven modeling space from the pixels to the wavefront.
arXiv Detail & Related papers (2022-03-09T17:39:18Z) - Learning Neural Light Fields with Ray-Space Embedding Networks [51.88457861982689]
We propose a novel neural light field representation that is compact and directly predicts integrated radiance along rays.
Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset.
arXiv Detail & Related papers (2021-12-02T18:59:51Z) - Spatially-Variant CNN-based Point Spread Function Estimation for Blind
Deconvolution and Depth Estimation in Optical Microscopy [6.09170287691728]
We present a method that improves the resolution of light microscopy images of thin, yet non-flat objects.
We estimate the parameters of a spatially-variant Point-Spread Function (PSF) model using a Convolutional Neural Network (CNN).
Our method recovers PSF parameters from the image itself with up to a squared Pearson correlation coefficient of 0.99 in ideal conditions.
arXiv Detail & Related papers (2020-10-08T14:20:16Z) - Neural Ray Surfaces for Self-Supervised Learning of Depth and Ego-motion [51.19260542887099]
We show that self-supervision can be used to learn accurate depth and ego-motion estimation without prior knowledge of the camera model.
Inspired by the geometric model of Grossberg and Nayar, we introduce Neural Ray Surfaces (NRS), convolutional networks that represent pixel-wise projection rays.
We demonstrate the use of NRS for self-supervised learning of visual odometry and depth estimation from raw videos obtained using a wide variety of camera systems.
arXiv Detail & Related papers (2020-08-15T02:29:13Z) - Light Field Spatial Super-resolution via Deep Combinatorial Geometry
Embedding and Structural Consistency Regularization [99.96632216070718]
Light field (LF) images acquired by hand-held devices usually suffer from low spatial resolution.
The high-dimensional spatiality characteristic and complex geometrical structure of LF images make the problem more challenging than traditional single-image SR.
We propose a novel learning-based LF framework, in which each view of an LF image is first individually super-resolved.
arXiv Detail & Related papers (2020-04-05T14:39:57Z) - Single Image Depth Estimation Trained via Depth from Defocus Cues [105.67073923825842]
Estimating depth from a single RGB image is a fundamental task in computer vision.
In this work, we rely, instead of different views, on depth from focus cues.
We present results that are on par with supervised methods on KITTI and Make3D datasets and outperform unsupervised learning approaches.
arXiv Detail & Related papers (2020-01-14T20:22:54Z) - Learning Wavefront Coding for Extended Depth of Field Imaging [4.199844472131922]
Extended depth of field (EDoF) imaging is a challenging ill-posed problem.
We propose a computational imaging approach for EDoF, where we employ wavefront coding via a diffractive optical element.
We demonstrate results with minimal artifacts in various scenarios, including deep 3D scenes and broadband imaging.
arXiv Detail & Related papers (2019-12-31T17:00:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.