Single-shot autofocusing of microscopy images using deep learning
- URL: http://arxiv.org/abs/2003.09585v2
- Date: Fri, 22 Jan 2021 06:20:14 GMT
- Title: Single-shot autofocusing of microscopy images using deep learning
- Authors: Yilin Luo, Luzhe Huang, Yair Rivenson, Aydogan Ozcan
- Abstract summary: A deep learning-based offline autofocusing method, termed Deep-R, is trained to rapidly and blindly autofocus a single-shot microscopy image.
Deep-R is significantly faster when compared with standard online algorithmic autofocusing methods.
- Score: 0.30586855806896046
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We demonstrate a deep learning-based offline autofocusing method, termed
Deep-R, that is trained to rapidly and blindly autofocus a single-shot
microscopy image of a specimen that is acquired at an arbitrary out-of-focus
plane. We illustrate the efficacy of Deep-R using various tissue sections that
were imaged using fluorescence and brightfield microscopy modalities and
demonstrate snapshot autofocusing under different scenarios, such as a uniform
axial defocus as well as a sample tilt within the field-of-view. Our results
reveal that Deep-R is significantly faster when compared with standard online
algorithmic autofocusing methods. This deep learning-based blind autofocusing
framework opens up new opportunities for rapid microscopic imaging of large
sample areas, also reducing the photon dose on the sample.
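For context, the standard online algorithmic autofocusing that Deep-R is benchmarked against typically captures several axial planes and selects the one maximizing a sharpness metric. A minimal sketch of that baseline (not the authors' code; the variance-of-Laplacian metric and the synthetic box-blur focal stack are illustrative assumptions):

```python
import numpy as np

def variance_of_laplacian(img):
    # 5-point Laplacian; in-focus images have more high-frequency energy
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return lap.var()

def box_blur(img, radius):
    # crude defocus proxy: repeated 5-point averaging
    out = img.astype(float)
    for _ in range(radius):
        out = (out + np.roll(out, 1, 0) + np.roll(out, -1, 0)
                   + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 5.0
    return out

rng = np.random.default_rng(0)
scene = rng.random((64, 64))  # synthetic specimen texture

# simulate a focal stack: defocus grows with distance from plane index 3
stack = [box_blur(scene, abs(z - 3)) for z in range(8)]
scores = [variance_of_laplacian(s) for s in stack]
best_z = int(np.argmax(scores))  # online AF: pick the sharpest plane
```

Deep-R's advantage in this comparison is that it needs only one out-of-focus image, whereas the search above must acquire and score the whole stack.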
Related papers
- Towards Real-World Focus Stacking with Deep Learning [97.34754533628322]
We introduce a new dataset consisting of 94 high-resolution bursts of raw images with focus bracketing.
This dataset is used to train the first deep learning algorithm for focus stacking capable of handling bursts of sufficient length for real-world applications.
arXiv Detail & Related papers (2023-11-29T17:49:33Z) - Defocus Blur Synthesis and Deblurring via Interpolation and
Extrapolation in Latent Space [3.097163558730473]
We train autoencoders with implicit and explicit regularization techniques to enforce linearity relations.
Compared to existing works, we use a simple architecture to synthesize images with flexible blur levels.
Our regularized autoencoders can effectively mimic blur and deblur, increasing data variety as a data augmentation technique.
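The idea of traversing blur levels by moving linearly in latent space can be illustrated with a toy linear stand-in for the paper's trained autoencoders. Here a PCA "autoencoder" and a moving-average blur are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 64
X_sharp = rng.random((n, d))             # toy sharp "images" (flattened)

# toy defocus: circular 3-tap moving average along the feature axis
K = np.zeros((d, d))
for i in range(d):
    for j in (-1, 0, 1):
        K[i, (i + j) % d] = 1.0 / 3.0
X_blur = X_sharp @ K

# linear "autoencoder" fitted by PCA on the pooled sharp + blurred data
X = np.vstack([X_sharp, X_blur])
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
W = Vt[:32]                              # 32-dimensional latent space
encode = lambda x: (x - mu) @ W.T
decode = lambda z: z @ W + mu

# interpolate halfway between the sharp and blurred latent codes
z_mid = 0.5 * encode(X_sharp) + 0.5 * encode(X_blur)
X_mid = decode(z_mid)

# the midpoint code decodes to an intermediate blur level,
# measured here by overall variance (blurring suppresses variance)
v_sharp, v_mid, v_blur = X_sharp.var(), X_mid.var(), X_blur.var()
```

Extrapolating beyond the endpoints (coefficients outside [0, 1]) is the analogous mechanism for synthesizing stronger blur or sharper reconstructions.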
arXiv Detail & Related papers (2023-07-28T10:27:28Z) - Fully Self-Supervised Depth Estimation from Defocus Clue [79.63579768496159]
We propose a self-supervised framework that estimates depth purely from a sparse focal stack.
We show that our framework circumvents the need for depth and all-in-focus (AIF) ground-truth images and achieves superior predictions.
arXiv Detail & Related papers (2023-03-19T19:59:48Z) - Deep Depth from Focus with Differential Focus Volume [17.505649653615123]
We propose a convolutional neural network (CNN) to find the best-focused pixels in a focal stack and infer depth from the focus estimation.
The key innovation of the network is the novel deep differential focus volume (DFV).
arXiv Detail & Related papers (2021-12-03T04:49:51Z) - Single image deep defocus estimation and its applications [82.93345261434943]
We train a deep neural network to classify image patches into one of the 20 levels of blurriness.
The trained model is used to determine the patch blurriness which is then refined by applying an iterative weighted guided filter.
The result is a defocus map that carries the information of the degree of blurriness for each pixel.
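The per-patch pipeline can be sketched with a simple heuristic standing in for the trained 20-class CNN: below, local gradient energy is quantized into 20 blurriness levels to produce a coarse defocus map (the heuristic, patch size, and synthetic half-blurred scene are all illustrative assumptions, not the paper's method):

```python
import numpy as np

def patch_blurriness_map(img, patch=8, levels=20):
    # assign each patch one of `levels` classes: 0 = sharpest,
    # levels-1 = blurriest (heuristic stand-in for a trained classifier)
    H, W = img.shape
    gy, gx = np.gradient(img)
    energy = gy**2 + gx**2
    ph, pw = H // patch, W // patch
    emap = (energy[:ph * patch, :pw * patch]
            .reshape(ph, patch, pw, patch).mean(axis=(1, 3)))
    lo, hi = emap.min(), emap.max()
    return ((hi - emap) / (hi - lo + 1e-12) * (levels - 1)).round().astype(int)

rng = np.random.default_rng(3)
blurred = rng.random((64, 64))
# heavily smooth the right half to simulate spatially varying defocus
for _ in range(4):
    b = blurred[:, 32:]
    blurred[:, 32:] = (b + np.roll(b, 1, 0) + np.roll(b, -1, 0)
                         + np.roll(b, 1, 1) + np.roll(b, -1, 1)) / 5.0

dmap = patch_blurriness_map(blurred)   # 8x8 grid of blurriness classes
```

The paper additionally refines this coarse map with an iterative weighted guided filter to reach per-pixel resolution; that refinement step is omitted here.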
arXiv Detail & Related papers (2021-07-30T06:18:16Z) - A parameter refinement method for Ptychography based on Deep Learning
concepts [55.41644538483948]
Coarse parametrisation of the propagation distance, position errors, and partial coherence frequently threatens the viability of the experiment.
A modern deep learning framework is used to autonomously correct these setup incoherences, thus improving the quality of the ptychographic reconstruction.
We tested our system on both synthetic datasets and also on real data acquired at the TwinMic beamline of the Elettra synchrotron facility.
arXiv Detail & Related papers (2021-05-18T10:15:17Z) - A learning-based view extrapolation method for axial super-resolution [52.748944517480155]
Axial light field resolution refers to the ability to distinguish features at different depths by refocusing.
We propose a learning-based method to extrapolate novel views from axial volumes of sheared epipolar plane images.
arXiv Detail & Related papers (2021-03-11T07:22:13Z) - Deep Autofocus for Synthetic Aperture Sonar [28.306713374371814]
In this letter, we demonstrate the potential of machine learning, specifically deep learning, to address the autofocus problem.
We formulate the problem as a self-supervised, phase error estimation task using a deep network we call Deep Autofocus.
Our results demonstrate Deep Autofocus can produce imagery that is perceptually as good as benchmark iterative techniques but at a substantially lower computational cost.
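The iterative benchmark that Deep Autofocus is compared against can be sketched in one dimension: model the defocus as a low-order phase error in the measurement spectrum and search for the correction that yields the sharpest (minimum-entropy) image. The quadratic phase model, grid search, and entropy criterion below are standard textbook choices, not the letter's exact setup:

```python
import numpy as np

rng = np.random.default_rng(2)
# sparse 1-D "scene" and its phase-corrupted measurement
scene = np.zeros(128)
scene[[20, 64, 100]] = [1.0, 0.8, 0.6]
k = np.fft.fftfreq(128)
true_coef = 40.0
meas = np.fft.fft(scene) * np.exp(1j * true_coef * k**2)  # quadratic phase error

def entropy(img):
    # amplitude entropy: lower means a sharper, more concentrated image
    p = img / img.sum()
    return -(p * np.log(p + 1e-12)).sum()

# classical autofocus baseline: grid-search the phase coefficient
# that minimizes image entropy after correction
coefs = np.linspace(0.0, 80.0, 161)
images = [np.abs(np.fft.ifft(meas * np.exp(-1j * c * k**2))) for c in coefs]
best = coefs[int(np.argmin([entropy(im) for im in images]))]
```

A learned estimator replaces this repeated reconstruct-and-score loop with a single forward pass, which is where the computational savings come from.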
arXiv Detail & Related papers (2020-10-29T15:31:15Z) - Rapid Whole Slide Imaging via Learning-based Two-shot Virtual
Autofocusing [57.90239401665367]
Whole slide imaging (WSI) is an emerging technology for digital pathology.
We propose the concept of virtual autofocusing, which does not rely on mechanical adjustment to conduct refocusing.
arXiv Detail & Related papers (2020-03-14T13:40:33Z) - DeepFocus: a Few-Shot Microscope Slide Auto-Focus using a Sample
Invariant CNN-based Sharpness Function [6.09170287691728]
Autofocus (AF) methods are extensively used in biomicroscopy, for example to acquire timelapses.
Current hardware-based methods require modifying the microscope, and existing image-based algorithms have limitations of their own.
We propose DeepFocus, an AF method we implemented as a Micro-Manager plugin.
arXiv Detail & Related papers (2020-01-02T23:29:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.