Virtual staining of defocused autofluorescence images of unlabeled
tissue using deep neural networks
- URL: http://arxiv.org/abs/2207.02946v1
- Date: Wed, 6 Jul 2022 19:55:37 GMT
- Authors: Yijie Zhang, Luzhe Huang, Tairan Liu, Keyi Cheng, Kevin de Haan, Yuzhu
Li, Bijie Bai, Aydogan Ozcan
- Abstract summary: We introduce a fast virtual staining framework that can stain defocused autofluorescence images of unlabeled tissue.
This framework incorporates a virtual-autofocusing neural network to digitally refocus the defocused images and then transforms the refocused images into virtually stained images.
- Score: 2.433294561208518
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning-based virtual staining was developed to introduce image
contrast to label-free tissue sections, digitally matching histological
staining, a process that is time-consuming, labor-intensive, and destructive to tissue.
Standard virtual staining requires high autofocusing precision during the whole
slide imaging of label-free tissue, which consumes a significant portion of the
total imaging time and can lead to tissue photodamage. Here, we introduce a
fast virtual staining framework that can stain defocused autofluorescence
images of unlabeled tissue, achieving performance equivalent to virtual
staining of in-focus label-free images while saving significant imaging time by
lowering the microscope's autofocusing precision. This framework incorporates a
virtual-autofocusing neural network to digitally refocus the defocused images
and then transforms the refocused images into virtually stained images using a
successive network. These cascaded networks form a collaborative inference
scheme: the virtual staining model regularizes the virtual-autofocusing network
through a style loss during the training. To demonstrate the efficacy of this
framework, we trained and blindly tested these networks using human lung
tissue. Using 4x fewer focus points with 2x lower focusing precision, we
successfully transformed the coarsely-focused autofluorescence images into
high-quality virtually stained H&E images, matching the standard virtual
staining framework that used finely-focused autofluorescence input images.
Without sacrificing the staining quality, this framework decreases the total
image acquisition time needed for virtual staining of a label-free whole-slide
image (WSI) by ~32%, together with a ~89% decrease in the autofocusing time,
and has the potential to eliminate the laborious and costly histochemical
staining process in pathology.
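The cascaded inference scheme described in the abstract can be illustrated with a minimal sketch. The two networks below are hypothetical placeholders (identity and channel replication) standing in for the trained CNNs, and the Gram-matrix formulation is one common way to compute a style loss, assumed here for illustration; the paper does not spell out its exact form.

```python
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    # Channel-correlation (Gram) matrix of a (C, H, W) feature map.
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(generated: np.ndarray, reference: np.ndarray) -> float:
    # Mean squared difference between Gram matrices: penalizes style mismatch.
    return float(np.mean((gram_matrix(generated) - gram_matrix(reference)) ** 2))

def autofocus_net(defocused: np.ndarray) -> np.ndarray:
    # Placeholder for the trained virtual-autofocusing CNN (identity here).
    return defocused

def staining_net(refocused: np.ndarray) -> np.ndarray:
    # Placeholder for the trained virtual-staining CNN: maps one
    # autofluorescence channel to a 3-channel virtual H&E image.
    return np.stack([refocused, refocused, refocused], axis=0)

rng = np.random.default_rng(0)
defocused_af = rng.random((256, 256))            # defocused autofluorescence input
virtual_he = staining_net(autofocus_net(defocused_af))
print(virtual_he.shape)                          # prints (3, 256, 256)
```

In the paper's collaborative training scheme, a loss of this style-loss form is backpropagated from the staining network's output into the autofocusing network; the placeholders above would be replaced by the trained models.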
Related papers
- Super-resolved virtual staining of label-free tissue using diffusion models [2.8661150986074384]
This study presents a diffusion model-based super-resolution virtual staining approach utilizing a Brownian bridge process.
Our approach integrates novel sampling techniques into a diffusion model-based image inference process.
Blindly applied to lower-resolution auto-fluorescence images of label-free human lung tissue samples, the diffusion-based super-resolution virtual staining model consistently outperformed conventional approaches in resolution, structural similarity and perceptual accuracy.
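A Brownian bridge is a diffusion process pinned at both endpoints, which lets the sampler travel from the low-resolution input to the stained target rather than from pure noise. A minimal sampling sketch, using the standard mean and variance of the bridge (not the paper's trained sampler):

```python
import numpy as np

def brownian_bridge_sample(x0, xT, t, T, rng):
    # x_t of a Brownian bridge pinned at x0 (t=0) and xT (t=T):
    # the mean interpolates the endpoints; variance t*(T-t)/T vanishes at both ends.
    mean = (1 - t / T) * x0 + (t / T) * xT
    var = t * (T - t) / T
    return mean + np.sqrt(var) * rng.standard_normal(x0.shape)

rng = np.random.default_rng(1)
x0 = np.zeros((64, 64))   # e.g. the low-resolution conditioning image
xT = np.ones((64, 64))    # e.g. the high-resolution stained target
x_mid = brownian_bridge_sample(x0, xT, t=0.5, T=1.0, rng=rng)
```

In a diffusion model the reverse process learns to denoise such intermediate states step by step; this sketch only shows the forward bridge that pins both endpoints.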
arXiv Detail & Related papers (2024-10-26T04:31:17Z)
- Single color digital H&E staining with In-and-Out Net [0.8271394038014485]
This paper introduces a novel network, In-and-Out Net, specifically designed for virtual staining tasks.
Based on Generative Adversarial Networks (GAN), our model efficiently transforms Reflectance Confocal Microscopy (RCM) images into Hematoxylin and Eosin stained images.
arXiv Detail & Related papers (2024-05-22T01:17:27Z)
- Autonomous Quality and Hallucination Assessment for Virtual Tissue Staining and Digital Pathology [0.11728348229595655]
We present an autonomous quality and hallucination assessment method (termed AQuA) for virtual tissue staining.
AQuA achieves 99.8% accuracy when detecting acceptable and unacceptable virtually stained tissue images.
arXiv Detail & Related papers (2024-04-29T06:32:28Z)
- Recovering Continuous Scene Dynamics from A Single Blurry Image with Events [58.7185835546638]
An Implicit Video Function (IVF) is learned to represent a single motion blurred image with concurrent events.
A dual attention transformer is proposed to efficiently leverage merits from both modalities.
The proposed network is trained only with the supervision of ground-truth images of limited referenced timestamps.
arXiv Detail & Related papers (2023-04-05T18:44:17Z)
- Virtual stain transfer in histology via cascaded deep neural networks [2.309018557701645]
We demonstrate a virtual stain transfer framework via a cascaded deep neural network (C-DNN)
Unlike a single neural network, which takes only one stain type as input and digitally outputs images of another stain type, the C-DNN first uses virtual staining to transform autofluorescence microscopy images into H&E images.
We successfully transferred the H&E-stained tissue images into virtual PAS (periodic acid-Schiff) stain.
arXiv Detail & Related papers (2022-07-14T00:43:18Z)
- Universal and Flexible Optical Aberration Correction Using Deep-Prior Based Deconvolution [51.274657266928315]
We propose a PSF aware plug-and-play deep network, which takes the aberrant image and PSF map as input and produces the latent high quality version via incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
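Before learned, PSF-aware networks, the classical baseline for this problem was model-based deconvolution with a known PSF. A minimal Wiener-deconvolution sketch of that baseline (not the paper's network; the box-blur PSF and noise-free setup are illustrative assumptions):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=100.0):
    # Classical Wiener deconvolution: invert a known PSF in the Fourier domain,
    # regularized by an inverse signal-to-noise term to avoid noise blow-up.
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr) * G
    return np.real(np.fft.ifft2(F))

rng = np.random.default_rng(2)
sharp = rng.random((32, 32))
psf = np.zeros((32, 32))
psf[:3, :3] = 1.0 / 9.0                      # a 3x3 box-blur PSF
# Circular convolution of the sharp image with the PSF (noise-free demo).
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(psf)))
# Noise-free demo, so a very large SNR; with real noise, set snr accordingly.
restored = wiener_deconvolve(blurred, psf, snr=1e8)
```

The learned approach replaces this fixed inverse filter with lens-specific deep priors, which is what allows quick adaptation to a new lens by refining parameters rather than re-deriving a filter.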
arXiv Detail & Related papers (2021-04-07T12:00:38Z)
- NuI-Go: Recursive Non-Local Encoder-Decoder Network for Retinal Image Non-Uniform Illumination Removal [96.12120000492962]
The quality of retinal images is often clinically unsatisfactory due to eye lesions and imperfect imaging process.
One of the most challenging quality degradation issues in retinal images is non-uniform illumination.
We propose a non-uniform illumination removal network for retinal image, called NuI-Go.
arXiv Detail & Related papers (2020-08-07T04:31:33Z)
- Modeling and Enhancing Low-quality Retinal Fundus Images [167.02325845822276]
Low-quality fundus images increase uncertainty in clinical observation and lead to the risk of misdiagnosis.
We propose a clinically oriented fundus enhancement network (cofe-Net) to suppress global degradation factors.
Experiments on both synthetic and real images demonstrate that our algorithm effectively corrects low-quality fundus images without losing retinal details.
arXiv Detail & Related papers (2020-05-12T08:01:16Z)
- Rapid Whole Slide Imaging via Learning-based Two-shot Virtual Autofocusing [57.90239401665367]
Whole slide imaging (WSI) is an emerging technology for digital pathology.
We propose the concept of virtual autofocusing, which does not rely on mechanical adjustment to conduct refocusing.
arXiv Detail & Related papers (2020-03-14T13:40:33Z)
- Self-Supervised Linear Motion Deblurring [112.75317069916579]
Deep convolutional neural networks are state-of-the-art for image deblurring.
We present a differentiable reblur model for self-supervised motion deblurring.
Our experiments demonstrate that self-supervised single-image deblurring is feasible.
arXiv Detail & Related papers (2020-02-10T20:15:21Z)
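The differentiable reblur idea behind the last entry can be sketched without any network: with a known linear motion kernel, re-blurring a candidate sharp image and comparing it to the observation gives a supervision signal that needs no ground-truth sharp frames. The horizontal box kernel and loss below are simplified assumptions, not the paper's model:

```python
import numpy as np

def linear_blur(image, length=5):
    # Horizontal linear motion blur: average of `length` circularly shifted copies.
    acc = np.zeros_like(image)
    for s in range(length):
        acc += np.roll(image, s, axis=1)
    return acc / length

def reblur_loss(deblurred_estimate, observed_blurry, length=5):
    # Self-supervised objective: re-blur the estimate with the known (or jointly
    # estimated) kernel and compare against the actual blurry observation.
    diff = linear_blur(deblurred_estimate, length) - observed_blurry
    return float(np.mean(diff ** 2))

rng = np.random.default_rng(3)
sharp = rng.random((16, 16))
blurry = linear_blur(sharp)
# The true sharp image attains zero reblur loss; the blurry image itself does not.
```

A deblurring network trained to minimize this loss never sees a sharp ground-truth image, which is what makes the scheme self-supervised.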
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.