Spatially-Variant CNN-based Point Spread Function Estimation for Blind
Deconvolution and Depth Estimation in Optical Microscopy
- URL: http://arxiv.org/abs/2010.04011v2
- Date: Tue, 13 Oct 2020 09:39:50 GMT
- Title: Spatially-Variant CNN-based Point Spread Function Estimation for Blind
Deconvolution and Depth Estimation in Optical Microscopy
- Authors: Adrian Shajkofci, Michael Liebling
- Abstract summary: We present a method that improves the resolution of light microscopy images of thin, yet non-flat objects.
We estimate the parameters of a spatially-variant Point-Spread function (PSF) model using a Convolutional Neural Network (CNN).
Our method recovers PSF parameters from the image itself with up to a squared Pearson correlation coefficient of 0.99 in ideal conditions.
- Score: 6.09170287691728
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optical microscopy is an essential tool in biology and medicine. Imaging
thin, yet non-flat objects in a single shot (without relying on more
sophisticated sectioning setups) remains challenging as the shallow depth of
field that comes with high-resolution microscopes leads to unsharp image
regions and makes depth localization and quantitative image interpretation
difficult.
Here, we present a method that improves the resolution of light microscopy
images of such objects by locally estimating image distortion while jointly
estimating object distance to the focal plane. Specifically, we estimate the
parameters of a spatially-variant Point-Spread function (PSF) model using a
Convolutional Neural Network (CNN), which does not require instrument- or
object-specific calibration. Our method recovers PSF parameters from the image
itself with up to a squared Pearson correlation coefficient of 0.99 in ideal
conditions, while remaining robust to object rotation, illumination variations,
or photon noise. When the recovered PSFs are used with a spatially-variant and
regularized Richardson-Lucy deconvolution algorithm, we observed up to 2.1 dB
better signal-to-noise ratio compared to other blind deconvolution techniques.
Following microscope-specific calibration, we further demonstrate that the
recovered PSF model parameters permit estimating surface depth with a precision
of 2 micrometers and over an extended range when using engineered PSFs. Our
method opens up multiple possibilities for enhancing images of non-flat objects
with minimal need for a priori knowledge about the optical setup.
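
The abstract outlines a two-stage pipeline: a CNN regresses local PSF parameters from the image, and the recovered PSFs drive a spatially-variant Richardson-Lucy deconvolution. The sketch below is a minimal illustration under simplifying assumptions, not the authors' implementation: the names (PatchPSFRegressor, gaussian_psf, patchwise_deconvolve) are hypothetical, a single isotropic Gaussian width stands in for the paper's PSF parameterization, and plain Richardson-Lucy from scikit-image applied per tile (a piecewise-constant approximation) stands in for the spatially-variant, regularized algorithm.

```python
# Illustrative sketch only; see hedges in the paragraph above.
import numpy as np
import torch
import torch.nn as nn
from skimage.restoration import richardson_lucy


class PatchPSFRegressor(nn.Module):
    """Hypothetical CNN mapping a grayscale patch to one PSF parameter (sigma)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x):                              # x: (B, 1, H, W)
        z = self.features(x).flatten(1)
        return nn.functional.softplus(self.head(z))    # keep sigma > 0


def gaussian_psf(sigma, size=15):
    """Isotropic Gaussian kernel used here as a stand-in PSF model."""
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()


def patchwise_deconvolve(image, model, patch=64):
    """Piecewise-constant approximation of spatially-variant deconvolution:
    estimate one PSF per tile, deconvolve the tile, and reassemble.
    Assumes the image dimensions are multiples of the tile size."""
    out = np.zeros_like(image, dtype=float)
    model.eval()
    with torch.no_grad():
        for i in range(0, image.shape[0], patch):
            for j in range(0, image.shape[1], patch):
                tile = image[i:i + patch, j:j + patch].astype(np.float32)
                scale = max(float(tile.max()), 1e-6)
                x = torch.from_numpy(tile)[None, None] / scale
                sigma = float(model(x))
                psf = gaussian_psf(sigma)
                out[i:i + patch, j:j + patch] = richardson_lucy(
                    tile / scale, psf, num_iter=30, clip=False)
    return out
```

In the paper's setting, a microscope-specific calibration would additionally map the recovered PSF parameters to a distance from the focal plane for the depth-estimation step; that mapping is omitted from this sketch.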
Related papers
- Fearless Luminance Adaptation: A Macro-Micro-Hierarchical Transformer
for Exposure Correction [65.5397271106534]
It is difficult for a single neural network to handle all exposure problems.
In particular, convolutions hinder the ability to restore faithful color or details in extremely over- or under-exposed regions.
We propose a Macro-Micro-Hierarchical transformer, which consists of macro attention to capture long-range dependencies, micro attention to extract local features, and a hierarchical structure for coarse-to-fine correction.
arXiv Detail & Related papers (2023-09-02T09:07:36Z) - Single Image Depth Prediction Made Better: A Multivariate Gaussian Take [163.14849753700682]
We introduce an approach that performs continuous modeling of per-pixel depth.
Our method (named MG) ranks among the top entries on the KITTI depth-prediction benchmark leaderboard.
arXiv Detail & Related papers (2023-03-31T16:01:03Z) - Fluctuation-based deconvolution in fluorescence microscopy using
plug-and-play denoisers [2.236663830879273]
The spatial resolution of images of living samples obtained with fluorescence microscopes is physically limited by the diffraction of visible light.
Several deconvolution and super-resolution techniques have been proposed to overcome this limitation.
arXiv Detail & Related papers (2023-03-20T15:43:52Z) - Pixelated Reconstruction of Foreground Density and Background Surface
Brightness in Gravitational Lensing Systems using Recurrent Inference
Machines [116.33694183176617]
We use a neural network based on the Recurrent Inference Machine to reconstruct an undistorted image of the background source and the lens mass density distribution as pixelated maps.
When compared to more traditional parametric models, the proposed method is significantly more expressive and can reconstruct complex mass distributions.
arXiv Detail & Related papers (2023-01-10T19:00:12Z) - Frequency-Aware Self-Supervised Monocular Depth Estimation [41.97188738587212]
We present two versatile methods to enhance self-supervised monocular depth estimation models.
The high generalizability of our methods is achieved by solving fundamental and ubiquitous problems in the photometric loss function.
We are the first to propose blurring images to improve depth estimators, and we support this with an interpretable analysis.
arXiv Detail & Related papers (2022-10-11T14:30:26Z) - Estimation of Optical Aberrations in 3D Microscopic Bioimages [1.588193964339148]
We describe an extension of PhaseNet enabling its use on 3D images of biological samples.
We add a Python-based restoration of images via Richardson-Lucy deconvolution.
We demonstrate that deconvolution with the predicted PSF can not only remove the simulated aberrations but also improve the quality of real raw microscopy images with an unknown residual PSF.
arXiv Detail & Related papers (2022-09-16T13:22:25Z) - End-to-end Learning for Joint Depth and Image Reconstruction from
Diffracted Rotation [10.896567381206715]
We propose a novel end-to-end learning approach for depth from diffracted rotation.
Our approach requires a significantly less complex model and less training data, yet it is superior to existing methods in the task of monocular depth estimation.
arXiv Detail & Related papers (2022-04-14T16:14:37Z) - Universal and Flexible Optical Aberration Correction Using Deep-Prior
Based Deconvolution [51.274657266928315]
We propose a PSF-aware plug-and-play deep network, which takes the aberrant image and a PSF map as input and produces the latent high-quality version by incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
arXiv Detail & Related papers (2021-04-07T12:00:38Z) - Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian
Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and improves efficiency.
arXiv Detail & Related papers (2021-03-22T18:06:58Z) - A learning-based view extrapolation method for axial super-resolution [52.748944517480155]
Axial light field resolution refers to the ability to distinguish features at different depths by refocusing.
We propose a learning-based method to extrapolate novel views from axial volumes of sheared epipolar plane images.
arXiv Detail & Related papers (2021-03-11T07:22:13Z) - DeepFocus: a Few-Shot Microscope Slide Auto-Focus using a Sample
Invariant CNN-based Sharpness Function [6.09170287691728]
Autofocus (AF) methods are extensively used in biomicroscopy, for example to acquire timelapses.
Current hardware-based methods require modifying the microscope, while image-based algorithms either rely on many images to converge or need instrument-specific training data and models.
We propose DeepFocus, an AF method we implemented as a Micro-Manager plugin.
arXiv Detail & Related papers (2020-01-02T23:29:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality or accuracy of this information and is not responsible for any consequences arising from its use.