DeepFocus: a Few-Shot Microscope Slide Auto-Focus using a Sample
Invariant CNN-based Sharpness Function
- URL: http://arxiv.org/abs/2001.00667v1
- Date: Thu, 2 Jan 2020 23:29:11 GMT
- Title: DeepFocus: a Few-Shot Microscope Slide Auto-Focus using a Sample
Invariant CNN-based Sharpness Function
- Authors: Adrian Shajkofci, Michael Liebling
- Abstract summary: Autofocus (AF) methods are extensively used in biomicroscopy, for example to acquire timelapses.
Current hardware-based methods require modifying the microscope, while image-based algorithms either need many images to converge or instrument-specific training data.
We propose DeepFocus, an AF method we implemented as a Micro-Manager plugin.
- Score: 6.09170287691728
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autofocus (AF) methods are extensively used in biomicroscopy, for example to
acquire timelapses, where the imaged objects tend to drift out of focus. AF
algorithms determine an optimal distance by which to move the sample back into
the focal plane. Current hardware-based methods require modifying the
microscope, while image-based algorithms either rely on many images to converge to
the sharpest position or need training data and models specific to each
instrument and imaging configuration. Here we propose DeepFocus, an AF method
we implemented as a Micro-Manager plugin, and characterize its convolutional
neural network (CNN)-based sharpness function, which we observed to be depth
co-variant and sample-invariant. Sample invariance allows our AF algorithm to
converge to an optimal axial position within as few as three iterations using a
model trained once for use with a wide range of optical microscopes and a
single instrument-dependent calibration stack acquisition of a flat (but
arbitrary) textured object. From experiments carried out both on synthetic and
experimental data, we observed an average precision, given 3 measured images,
of 0.30 ± 0.16 micrometers with a 10x, NA 0.3 objective. We foresee that this
performance and low image number will help limit photodamage during
acquisitions with light-sensitive samples.
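To make the procedure concrete, below is a minimal, hypothetical sketch of how a CNN-based sharpness score can drive a few-image autofocus search. It is not the authors' Micro-Manager plugin or the DeepFocus training procedure: the network `SharpnessNet`, the `acquire_at(z)` stage/camera callback, and the quadratic fit over three probe positions are illustrative assumptions standing in for the calibrated, sample-invariant sharpness function described above.

```python
# Conceptual sketch, NOT the DeepFocus implementation: a CNN maps an image to a
# scalar sharpness score, and a few probe acquisitions around the current axial
# position are fitted with a parabola to estimate the in-focus position.
import numpy as np
import torch
import torch.nn as nn


class SharpnessNet(nn.Module):
    """Tiny illustrative CNN mapping a grayscale image to a sharpness score."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):                      # x: (N, 1, H, W)
        return self.head(self.features(x).flatten(1)).squeeze(1)


def sharpness(model, image):
    """Score one image (H, W numpy array) with the sharpness CNN."""
    with torch.no_grad():
        return model(torch.from_numpy(image).float()[None, None]).item()


def autofocus(model, acquire_at, z0, half_range=5.0, n_probe=3):
    """Acquire `n_probe` images around z0 (micrometers), fit score vs. z with a
    parabola, and return the estimated in-focus axial position."""
    zs = np.linspace(z0 - half_range, z0 + half_range, n_probe)
    scores = np.array([sharpness(model, acquire_at(z)) for z in zs])
    a, b, _ = np.polyfit(zs, scores, 2)        # score ~ a*z^2 + b*z + c
    if a >= 0:                                 # no peak found: take the best probe
        return float(zs[np.argmax(scores)])
    return float(np.clip(-b / (2.0 * a), zs[0], zs[-1]))


# With a real setup one would pass a trained, calibrated model and a callback
# that moves the stage and grabs a frame, e.g. z_best = autofocus(model, grab, z0).
```

In DeepFocus itself, a single calibration stack of a flat, textured object characterizes the instrument-dependent sharpness-versus-depth behavior; the plain quadratic fit above is only a stand-in for that step.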
Related papers
- InstantSplat: Sparse-view SfM-free Gaussian Splatting in Seconds [91.77050739918037]
Novel view synthesis (NVS) from a sparse set of images has advanced significantly in 3D computer vision.
It relies on precise initial estimation of camera parameters using Structure-from-Motion (SfM).
In this study, we introduce a novel and efficient framework to enhance robust NVS from sparse-view images.
arXiv Detail & Related papers (2024-03-29T17:29:58Z)
- Towards Real-World Focus Stacking with Deep Learning [97.34754533628322]
We introduce a new dataset consisting of 94 high-resolution bursts of raw images with focus bracketing.
This dataset is used to train the first deep learning algorithm for focus stacking capable of handling bursts of sufficient length for real-world applications.
arXiv Detail & Related papers (2023-11-29T17:49:33Z)
- Precise Point Spread Function Estimation [6.076995573805468]
We develop a precise mathematical model of the camera's point spread function to describe the defocus process.
Our experiments on standard planes and actual objects show that the proposed algorithm can accurately describe the defocus process.
arXiv Detail & Related papers (2022-03-06T12:43:27Z)
- Low dosage 3D volume fluorescence microscopy imaging using compressive sensing [0.0]
We present a compressive sensing (CS) based approach to fully reconstruct 3D volumes with the same signal-to-noise ratio (SNR) with less than half of the excitation dosage.
We demonstrate our technique by capturing a 3D volume of the RFP labeled neurons in the zebrafish embryo spinal cord with the axial sampling of 0.1um using a confocal microscope.
The developed CS-based methodology in this work can be easily applied to other deep imaging modalities such as two-photon and light-sheet microscopy, where reducing sample photo-toxicity is a critical challenge.
arXiv Detail & Related papers (2022-01-03T18:44:50Z)
- MC-Blur: A Comprehensive Benchmark for Image Deblurring [127.6301230023318]
In most real-world images, blur is caused by different factors, e.g., motion and defocus.
We construct a new large-scale multi-cause image deblurring dataset (called MC-Blur)
Based on the MC-Blur dataset, we conduct extensive benchmarking studies to compare SOTA methods in different scenarios.
arXiv Detail & Related papers (2021-12-01T02:10:42Z)
- Leveraging blur information for plenoptic camera calibration [6.0982543764998995]
This paper presents a novel calibration algorithm for plenoptic cameras, especially the multi-focus configuration.
In the multi-focus configuration, the same part of a scene will demonstrate different amounts of blur according to the micro-lens focal length.
Usually, only micro-images with the smallest amount of blur are used.
We propose to explicitly model the defocus blur in a new camera model with the help of our newly introduced Blur Aware Plenoptic feature.
arXiv Detail & Related papers (2021-11-09T16:07:07Z)
- Pixel-Perfect Structure-from-Motion with Featuremetric Refinement [96.73365545609191]
We refine two key steps of structure-from-motion by a direct alignment of low-level image information from multiple views.
This significantly improves the accuracy of camera poses and scene geometry for a wide range of keypoint detectors.
Our system easily scales to large image collections, enabling pixel-perfect crowd-sourced localization at scale.
arXiv Detail & Related papers (2021-08-18T17:58:55Z)
- Single image deep defocus estimation and its applications [82.93345261434943]
We train a deep neural network to classify image patches into one of 20 levels of blurriness.
The trained model determines the patch blurriness, which is then refined by applying an iterative weighted guided filter.
The result is a defocus map that carries the degree of blurriness for each pixel (a minimal patch-classification sketch follows this list).
arXiv Detail & Related papers (2021-07-30T06:18:16Z)
- Spatially-Variant CNN-based Point Spread Function Estimation for Blind Deconvolution and Depth Estimation in Optical Microscopy [6.09170287691728]
We present a method that improves the resolution of light microscopy images of thin, yet non-flat objects.
We estimate the parameters of a spatially-variant Point-Spread Function (PSF) model using a Convolutional Neural Network (CNN).
Our method recovers PSF parameters from the image itself with up to a squared Pearson correlation coefficient of 0.99 in ideal conditions.
arXiv Detail & Related papers (2020-10-08T14:20:16Z)
- Single-shot autofocusing of microscopy images using deep learning [0.30586855806896046]
Deep learning-based offline autofocusing method, termed Deep-R, is trained to rapidly and blindly autofocus a single-shot microscopy image.
Deep-R is significantly faster when compared with standard online algorithmic autofocusing methods.
arXiv Detail & Related papers (2020-03-21T06:07:27Z)
- Multi-View Photometric Stereo: A Robust Solution and Benchmark Dataset for Spatially Varying Isotropic Materials [65.95928593628128]
We present a method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo technique.
Our algorithm is suitable for perspective cameras and nearby point light sources.
arXiv Detail & Related papers (2020-01-18T12:26:22Z)
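As a companion to the single-image defocus estimation entry above, here is a minimal, hypothetical sketch of the patch-classification idea: a small CNN assigns each patch one of 20 blur levels, and the levels are tiled into a coarse per-pixel defocus map. The network `BlurLevelNet`, the 32-pixel patch size, and the omission of the iterative weighted guided-filter refinement are all simplifications, not that paper's actual pipeline.

```python
# Conceptual sketch: classify patches into discrete blurriness levels and
# assemble a coarse defocus map. Refinement (guided filtering) is omitted.
import numpy as np
import torch
import torch.nn as nn


class BlurLevelNet(nn.Module):
    """Tiny illustrative CNN classifying a grayscale patch into one of 20 blur levels."""

    def __init__(self, n_levels=20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_levels),
        )

    def forward(self, x):                      # x: (N, 1, patch, patch)
        return self.net(x)


def defocus_map(model, image, patch=32):
    """Tile the image into non-overlapping patches, predict a blur level for
    each, and broadcast the levels back to pixel resolution."""
    h, w = image.shape
    levels = np.zeros((h // patch, w // patch), dtype=np.int64)
    with torch.no_grad():
        for i in range(h // patch):
            for j in range(w // patch):
                tile = image[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
                x = torch.from_numpy(tile).float()[None, None]
                levels[i, j] = model(x).argmax(1).item()
    # Repeat each predicted level over its patch to form a per-pixel map.
    return np.kron(levels, np.ones((patch, patch), dtype=np.int64))
```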
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.