Precise Point Spread Function Estimation
- URL: http://arxiv.org/abs/2203.02953v1
- Date: Sun, 6 Mar 2022 12:43:27 GMT
- Title: Precise Point Spread Function Estimation
- Authors: Renzhi He, Yan Zhuang, Boya Fu, Fei Liu
- Abstract summary: We develop a precise mathematical model of the camera's point spread function to describe the defocus process.
Our experiments on standard planes and actual objects show that the proposed algorithm can accurately describe the defocus process.
- Score: 6.076995573805468
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Point spread function (PSF) plays a crucial role in many fields, such as
shape from focus/defocus, depth estimation, and imaging process in fluorescence
microscopy. However, the mathematical model of the defocus process is still
unclear because several variables in the point spread function are hard to
measure accurately, such as the f-number of cameras, the physical size of a
pixel, the focus depth, etc. In this work, we develop a precise mathematical
model of the camera's point spread function to describe the defocus process. We
first derive the mathematical algorithm for the PSF and extract two parameters
A and e. A is a composite of the camera's f-number, pixel size, output scale, and
the scaling factor of the circle of confusion; e is the deviation of the focus
depth. We design a novel metric based on the defocus histogram to evaluate the
difference between the simulated focused image and the actual focused image to
obtain optimal A and e. We also construct a hardware system consisting of a
focusing system and a structured light system to acquire the all-in-focus
image, the focused image with corresponding focus depth, and the depth map in
the same view. The three types of images, as a dataset, are used to obtain the
precise PSF. Our experiments on standard planes and actual objects show that
the proposed algorithm can accurately describe the defocus process. The
accuracy of our algorithm is further demonstrated by comparing the actual
focused images with those generated by our algorithm and by other methods. The
results show that the loss of our algorithm is, on average, 40% lower than that
of other methods. The dataset, code, and model are available on GitHub:
https://github.com/cubhe/precise-point-spread-function-estimation.
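The abstract's defocus model, in which blur is governed by a composite parameter A and a focus-depth deviation e, can be sketched with a standard thin-lens circle-of-confusion and Gaussian-PSF approximation. This is an illustrative reading, not the paper's exact formulation: the function names, the Gaussian kernel choice, and the precise dependence on A and e are assumptions.

```python
import numpy as np

def coc_sigma(depth, focus_depth, A, e):
    """Blur level (Gaussian sigma, in pixels) for a scene point at `depth`
    when the camera is focused near `focus_depth`.  Loosely follows the
    thin-lens circle-of-confusion model: blur grows with the deviation of
    1/depth from 1/(focus_depth + e).  `A` lumps together f-number, pixel
    size, and output scaling; `e` is the focus-depth deviation."""
    return A * abs(1.0 / depth - 1.0 / (focus_depth + e))

def gaussian_psf(sigma, radius=8):
    """Normalized 2-D Gaussian PSF kernel (a common CoC approximation)."""
    if sigma < 1e-6:                       # in focus: identity kernel
        k = np.zeros((2 * radius + 1, 2 * radius + 1))
        k[radius, radius] = 1.0
        return k
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    k = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return k / k.sum()                     # PSF must integrate to 1

# A point at the focus depth is sharp; a closer point is defocused.
sigma_sharp = coc_sigma(depth=2.0, focus_depth=2.0, A=500.0, e=0.0)
sigma_blur = coc_sigma(depth=1.0, focus_depth=2.0, A=500.0, e=0.0)
```

Convolving the all-in-focus image with a per-pixel PSF of this form simulates a focused image, which the paper's histogram-based metric would then compare against the real one to optimize A and e.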
Related papers
- Towards Real-World Focus Stacking with Deep Learning [97.34754533628322]
We introduce a new dataset consisting of 94 high-resolution bursts of raw images with focus bracketing.
This dataset is used to train the first deep learning algorithm for focus stacking capable of handling bursts of sufficient length for real-world applications.
arXiv Detail & Related papers (2023-11-29T17:49:33Z)
- Fully Self-Supervised Depth Estimation from Defocus Clue [79.63579768496159]
We propose a self-supervised framework that estimates depth purely from a sparse focal stack.
We show that our framework circumvents the need for depth and all-in-focus (AIF) image ground truth and achieves superior predictions.
arXiv Detail & Related papers (2023-03-19T19:59:48Z)
- Deep Depth from Focus with Differential Focus Volume [17.505649653615123]
We propose a convolutional neural network (CNN) to find the best-focused pixels in a focal stack and infer depth from the focus estimation.
The key innovation of the network is the novel deep differential focus volume (DFV).
arXiv Detail & Related papers (2021-12-03T04:49:51Z)
- Single image deep defocus estimation and its applications [82.93345261434943]
We train a deep neural network to classify image patches into one of the 20 levels of blurriness.
The trained model is used to determine the patch blurriness which is then refined by applying an iterative weighted guided filter.
The result is a defocus map that carries the information of the degree of blurriness for each pixel.
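As a rough, classical stand-in for the learned 20-level blurriness classifier described above (the paper trains a CNN, which this heuristic does not reproduce), per-patch sharpness is often scored by the variance of a Laplacian response:

```python
import numpy as np

def patch_sharpness(patch):
    """Variance of a discrete 4-neighbour Laplacian response.
    Sharp, textured patches score high; smooth (defocused) patches
    score near zero.  A hand-crafted heuristic, not the trained
    CNN classifier from the paper."""
    lap = (-4.0 * patch[1:-1, 1:-1]
           + patch[:-2, 1:-1] + patch[2:, 1:-1]
           + patch[1:-1, :-2] + patch[1:-1, 2:])
    return lap.var()

rng = np.random.default_rng(0)
textured = rng.random((32, 32))              # high-frequency detail
smooth = np.outer(np.linspace(0, 1, 32),
                  np.linspace(0, 1, 32))     # featureless ramp
```

Patches can then be binned into discrete blurriness levels by thresholding this score, producing a coarse defocus map analogous to the one described above.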
arXiv Detail & Related papers (2021-07-30T06:18:16Z)
- Real-Time, Deep Synthetic Aperture Sonar (SAS) Autofocus [34.77467193499518]
Synthetic aperture sonar (SAS) requires precise time-of-flight measurements of the transmitted/received waveform to produce well-focused imagery.
When these measurements are imperfect, an autofocus algorithm is employed as a post-processing step after image reconstruction to improve image focus.
We propose a deep learning technique to overcome these limitations and implicitly learn the weighting function in a data-driven manner.
arXiv Detail & Related papers (2021-03-18T15:16:29Z)
- A learning-based view extrapolation method for axial super-resolution [52.748944517480155]
Axial light field resolution refers to the ability to distinguish features at different depths by refocusing.
We propose a learning-based method to extrapolate novel views from axial volumes of sheared epipolar plane images.
arXiv Detail & Related papers (2021-03-11T07:22:13Z)
- Deep Autofocus for Synthetic Aperture Sonar [28.306713374371814]
In this letter, we demonstrate the potential of machine learning, specifically deep learning, to address the autofocus problem.
We formulate the problem as a self-supervised, phase error estimation task using a deep network we call Deep Autofocus.
Our results demonstrate Deep Autofocus can produce imagery that is perceptually as good as benchmark iterative techniques but at a substantially lower computational cost.
arXiv Detail & Related papers (2020-10-29T15:31:15Z)
- Defocus Blur Detection via Depth Distillation [64.78779830554731]
We introduce depth information into defocus blur detection (DBD) for the first time.
In detail, we learn the defocus blur from ground truth and the depth distilled from a well-trained depth estimation network.
Our approach outperforms 11 other state-of-the-art methods on two popular datasets.
arXiv Detail & Related papers (2020-07-16T04:58:09Z)
- Single Image Depth Estimation Trained via Depth from Defocus Cues [105.67073923825842]
Estimating depth from a single RGB image is a fundamental task in computer vision.
In this work, we rely, instead of different views, on depth from focus cues.
We present results that are on par with supervised methods on KITTI and Make3D datasets and outperform unsupervised learning approaches.
arXiv Detail & Related papers (2020-01-14T20:22:54Z)
- DeepFocus: a Few-Shot Microscope Slide Auto-Focus using a Sample Invariant CNN-based Sharpness Function [6.09170287691728]
Autofocus (AF) methods are extensively used in biomicroscopy, for example to acquire timelapses.
Current hardware-based AF methods require modifying the microscope, while image-based algorithms have limitations of their own.
We propose DeepFocus, an AF method we implemented as a Micro-Manager plugin.
arXiv Detail & Related papers (2020-01-02T23:29:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.