Rapid Whole Slide Imaging via Learning-based Two-shot Virtual
Autofocusing
- URL: http://arxiv.org/abs/2003.06630v1
- Date: Sat, 14 Mar 2020 13:40:33 GMT
- Title: Rapid Whole Slide Imaging via Learning-based Two-shot Virtual
Autofocusing
- Authors: Qiang Li, Xianming Liu, Kaige Han, Cheng Guo, Xiangyang Ji, and
Xiaolin Wu
- Abstract summary: Whole slide imaging (WSI) is an emerging technology for digital pathology.
We propose the concept of virtual autofocusing, which does not rely on mechanical adjustment to conduct refocusing.
- Score: 57.90239401665367
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Whole slide imaging (WSI) is an emerging technology for digital pathology.
Autofocusing is the main factor determining the performance of WSI.
Traditional autofocusing methods are either time-consuming due to repetitive
mechanical motions, or require additional hardware and thus are not compatible
with current WSI systems. In this paper, we propose the concept of
virtual autofocusing, which does not rely on mechanical adjustment to
conduct refocusing but instead recovers in-focus images in an offline
learning-based manner. Starting from the initial focal position, we perform
only two-shot imaging, whereas traditional methods commonly need to capture as
many as 21 images per tile. Considering that the two
captured out-of-focus images each retain complementary partial information about
the underlying in-focus image, we propose a U-Net-inspired deep neural network
to fuse them into a recovered in-focus image. The proposed
scheme is fast in tissue slides scanning, enabling a high-throughput generation
of digital pathology images. Experimental results demonstrate that our scheme
achieves satisfactory refocusing performance.
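The abstract describes a U-Net-inspired network that fuses the two out-of-focus shots into one recovered in-focus image. As a rough illustration only — the paper's actual depth, channel counts, and trained weights are not given here, so every layer size below is an assumption and the weights are random — a minimal NumPy sketch of the two-shot-in, one-image-out structure with a single skip connection might look like:

```python
import numpy as np

def conv3x3(x, w):
    """'Same' 3x3 cross-correlation followed by ReLU.
    x: (C_in, H, W), w: (C_out, C_in, 3, 3)."""
    c_out = w.shape[0]
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    H, W = x.shape[1], x.shape[2]
    out = np.zeros((c_out, H, W))
    for o in range(c_out):
        for c in range(x.shape[0]):
            for di in range(3):
                for dj in range(3):
                    out[o] += w[o, c, di, dj] * xp[c, di:di + H, dj:dj + W]
    return np.maximum(out, 0.0)  # ReLU

def down(x):
    return x[:, ::2, ::2]        # 2x downsampling (strided subsampling)

def up(x):
    return x.repeat(2, axis=1).repeat(2, axis=2)  # nearest-neighbour 2x upsample

def two_shot_unet(shot_a, shot_b, rng):
    """Toy U-Net-style fusion: two defocused shots in, one fused image out.
    Weights are random (untrained) — this shows the data flow only."""
    x = np.stack([shot_a, shot_b])                       # (2, H, W) two-shot input
    w1 = 0.1 * rng.standard_normal((8, 2, 3, 3))
    w2 = 0.1 * rng.standard_normal((16, 8, 3, 3))
    w3 = 0.1 * rng.standard_normal((8, 16 + 8, 3, 3))    # skip-connection concat
    w4 = 0.1 * rng.standard_normal((1, 8, 3, 3))
    e1 = conv3x3(x, w1)                                  # encoder level 1
    e2 = conv3x3(down(e1), w2)                           # encoder level 2 (half res)
    d1 = conv3x3(np.concatenate([up(e2), e1]), w3)       # decoder + skip connection
    return conv3x3(d1, w4)[0]                            # (H, W) recovered image
```

In the real system the two shots would be captured at fixed offsets around the initial focal position, and the network would be trained against in-focus ground truth; here the sketch only demonstrates the encoder–decoder shape with the two shots stacked as input channels.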
Related papers
- Towards Real-World Focus Stacking with Deep Learning [97.34754533628322]
We introduce a new dataset consisting of 94 high-resolution bursts of raw images with focus bracketing.
This dataset is used to train the first deep learning algorithm for focus stacking capable of handling bursts of sufficient length for real-world applications.
arXiv Detail & Related papers (2023-11-29T17:49:33Z)
- Recovering Continuous Scene Dynamics from A Single Blurry Image with Events [58.7185835546638]
An Implicit Video Function (IVF) is learned to represent a single motion blurred image with concurrent events.
A dual attention transformer is proposed to efficiently leverage merits from both modalities.
The proposed network is trained only with the supervision of ground-truth images of limited referenced timestamps.
arXiv Detail & Related papers (2023-04-05T18:44:17Z)
- Learnable Blur Kernel for Single-Image Defocus Deblurring in the Wild [9.246199263116067]
We propose a novel defocus deblurring method that uses the guidance of the defocus map to implement image deblurring.
The proposed method consists of a learnable blur kernel to estimate the defocus map and, for the first time, a single-image defocus deblurring generative adversarial network (DefocusGAN).
arXiv Detail & Related papers (2022-11-25T10:47:19Z)
- Learning to Deblur using Light Field Generated and Real Defocus Images [4.926805108788465]
Defocus deblurring is a challenging task due to the spatially varying nature of defocus blur.
We propose a novel deep defocus deblurring network that leverages the strengths and overcomes the shortcomings of light fields.
arXiv Detail & Related papers (2022-04-01T11:35:51Z)
- Defocus Map Estimation and Deblurring from a Single Dual-Pixel Image [54.10957300181677]
We present a method that takes as input a single dual-pixel image, and simultaneously estimates the image's defocus map.
Our approach improves upon prior works for both defocus map estimation and blur removal, despite being entirely unsupervised.
arXiv Detail & Related papers (2021-10-12T00:09:07Z)
- Real-Time, Deep Synthetic Aperture Sonar (SAS) Autofocus [34.77467193499518]
Synthetic aperture sonar (SAS) requires precise time-of-flight measurements of the transmitted/received waveform to produce well-focused imagery.
To overcome this, an autofocus algorithm is employed as a post-processing step after image reconstruction to improve image focus.
We propose a deep learning technique to overcome these limitations and implicitly learn the weighting function in a data-driven manner.
arXiv Detail & Related papers (2021-03-18T15:16:29Z)
- Deep Autofocus for Synthetic Aperture Sonar [28.306713374371814]
In this letter, we demonstrate the potential of machine learning, specifically deep learning, to address the autofocus problem.
We formulate the problem as a self-supervised, phase error estimation task using a deep network we call Deep Autofocus.
Our results demonstrate Deep Autofocus can produce imagery that is perceptually as good as benchmark iterative techniques but at a substantially lower computational cost.
arXiv Detail & Related papers (2020-10-29T15:31:15Z)
- Rethinking of the Image Salient Object Detection: Object-level Semantic Saliency Re-ranking First, Pixel-wise Saliency Refinement Latter [62.26677215668959]
We propose a lightweight, weakly supervised deep network to coarsely locate semantically salient regions.
We then fuse multiple off-the-shelf deep models on these semantically salient regions as the pixel-wise saliency refinement.
Our method is simple yet effective, and is the first attempt to treat salient object detection mainly as an object-level semantic re-ranking problem.
arXiv Detail & Related papers (2020-08-10T07:12:43Z)
- Single-shot autofocusing of microscopy images using deep learning [0.30586855806896046]
A deep learning-based offline autofocusing method, termed Deep-R, is trained to rapidly and blindly autofocus a single-shot microscopy image.
Deep-R is significantly faster when compared with standard online algorithmic autofocusing methods.
arXiv Detail & Related papers (2020-03-21T06:07:27Z)
- Self-Supervised Linear Motion Deblurring [112.75317069916579]
Deep convolutional neural networks are state-of-the-art for image deblurring.
We present a differentiable reblur model for self-supervised motion deblurring.
Our experiments demonstrate that self-supervised single image deblurring is indeed feasible.
arXiv Detail & Related papers (2020-02-10T20:15:21Z)
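The self-supervised reblur idea in the last entry above — re-blur the network's sharp estimate with an estimated motion kernel and compare it to the observed blurry input, so no sharp ground truth is needed — can be sketched as follows. The kernel handling and the mean-squared loss are illustrative assumptions, not that paper's actual implementation:

```python
import numpy as np

def reblur_loss(sharp_est, kernel, blurry):
    """Self-supervised reblur loss: convolve the deblurred estimate with the
    estimated blur kernel and compare the result to the observed blurry input."""
    k = kernel / kernel.sum()                 # normalise the blur kernel
    pad = k.shape[0] // 2
    xp = np.pad(sharp_est, pad, mode="edge")  # edge padding for 'same' output size
    H, W = sharp_est.shape
    reblurred = np.zeros_like(sharp_est)
    for di in range(k.shape[0]):              # direct (loop-based) 2D convolution
        for dj in range(k.shape[1]):
            reblurred += k[di, dj] * xp[di:di + H, dj:dj + W]
    return float(np.mean((reblurred - blurry) ** 2))
```

Training would minimise this loss over the network producing `sharp_est` and the parameters of the kernel; when the estimate and kernel jointly explain the blurry observation, the loss goes to zero.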
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides (including all summaries) and is not responsible for any consequences of its use.