Single color virtual H&E staining with In-and-Out Net
- URL: http://arxiv.org/abs/2405.13278v1
- Date: Wed, 22 May 2024 01:17:27 GMT
- Title: Single color virtual H&E staining with In-and-Out Net
- Authors: Mengkun Chen, Yen-Tung Liu, Fadeel Sher Khan, Matthew C. Fox, Jason S. Reichenberg, Fabiana C. P. S. Lopes, Katherine R. Sebastian, Mia K. Markey, James W. Tunnell
- Abstract summary: This paper introduces a novel network, In-and-Out Net, specifically designed for virtual staining tasks.
Based on Generative Adversarial Networks (GAN), our model efficiently transforms Reflectance Confocal Microscopy (RCM) images into Hematoxylin and Eosin stained images.
- Score: 0.8271394038014485
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Virtual staining streamlines traditional staining procedures by digitally generating stained images from unstained or differently stained images. While conventional staining methods involve time-consuming chemical processes, virtual staining offers an efficient, low-infrastructure alternative. Leveraging microscopy-based techniques, such as confocal microscopy, researchers can expedite tissue analysis without the need for physical sectioning. However, interpreting grayscale or pseudo-color microscopic images remains a challenge for pathologists and surgeons accustomed to traditional histologically stained images. To fill this gap, various studies explore digitally simulating staining to mimic targeted histological stains. This paper introduces a novel network, In-and-Out Net, specifically designed for virtual staining tasks. Based on Generative Adversarial Networks (GAN), our model efficiently transforms Reflectance Confocal Microscopy (RCM) images into Hematoxylin and Eosin (H&E) stained images. We enhance nuclei contrast in RCM images using aluminum chloride preprocessing for skin tissues. Training the model with virtual H&E labels featuring two fluorescence channels eliminates the need for image registration and provides pixel-level ground truth. Our contributions include proposing an optimal training strategy, conducting a comparative analysis demonstrating state-of-the-art performance, validating the model through an ablation study, and collecting perfectly matched input and ground truth images without registration. In-and-Out Net showcases promising results, offering a valuable tool for virtual staining tasks and advancing the field of histological image analysis.
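The abstract above notes that the training labels are virtual H&E images rendered from two fluorescence channels. A minimal sketch of that rendering idea is shown below, using a Beer-Lambert-style false-coloring common in prior virtual-H&E work; this is not the paper's code, and the attenuation coefficients and function name are illustrative placeholders:

```python
import numpy as np

# Illustrative per-RGB attenuation vectors (placeholder values, not from the paper):
# the nuclear channel is colored like hematoxylin, the second channel like eosin.
K_HEMATOXYLIN = np.array([0.86, 1.00, 0.30])
K_EOSIN = np.array([0.05, 1.00, 0.54])

def virtual_he(nuclear: np.ndarray, eosin: np.ndarray) -> np.ndarray:
    """Map two normalized fluorescence channels (H, W) to an RGB image (H, W, 3).

    Beer-Lambert-style mapping: unstained background stays bright (exp(0) = 1),
    strongly fluorescent structures become dark, as in a transmitted-light stain.
    """
    optical_density = (nuclear[..., None] * K_HEMATOXYLIN
                       + eosin[..., None] * K_EOSIN)
    return np.exp(-optical_density)

# Stand-in channel images; real inputs would be registered fluorescence frames.
rng = np.random.default_rng(0)
nuc = rng.random((64, 64))
eos = rng.random((64, 64))
rgb = virtual_he(nuc, eos)
print(rgb.shape)  # (64, 64, 3)
```

Because both label channels come from the same acquisition, the rendered RGB label is pixel-aligned with the input by construction, which is what removes the registration step.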
Related papers
- Super-resolved virtual staining of label-free tissue using diffusion models [2.8661150986074384]
This study presents a diffusion model-based super-resolution virtual staining approach utilizing a Brownian bridge process.
Our approach integrates novel sampling techniques into a diffusion model-based image inference process.
Blindly applied to lower-resolution auto-fluorescence images of label-free human lung tissue samples, the diffusion-based super-resolution virtual staining model consistently outperformed conventional approaches in resolution, structural similarity and perceptual accuracy.
arXiv Detail & Related papers (2024-10-26T04:31:17Z)
- Generating Seamless Virtual Immunohistochemical Whole Slide Images with Content and Color Consistency [2.063403009505468]
Immunohistochemical (IHC) stains play a vital role in a pathologist's analysis of medical images, providing crucial diagnostic information for various diseases.
Virtual staining from hematoxylin and eosin (H&E)-stained whole slide images (WSIs) allows the automatic production of other useful IHC stains without the expensive physical staining process.
Current virtual WSI generation methods based on tile-wise processing often suffer from inconsistencies in content, texture, and color at tile boundaries.
We propose a novel consistent WSI synthesis network, CC-WSI-Net, that extends GAN models to
arXiv Detail & Related papers (2024-10-01T21:02:16Z)
- Autonomous Quality and Hallucination Assessment for Virtual Tissue Staining and Digital Pathology [0.11728348229595655]
We present an autonomous quality and hallucination assessment method (termed AQuA) for virtual tissue staining.
AQuA achieves 99.8% accuracy when detecting acceptable and unacceptable virtually stained tissue images.
arXiv Detail & Related papers (2024-04-29T06:32:28Z)
- Digital staining in optical microscopy using deep learning -- a review [47.86254766044832]
Digital staining has emerged as a promising concept to use modern deep learning for the translation from optical contrast to established biochemical contrast of actual stainings.
We provide an in-depth analysis of the current state-of-the-art in this field, suggest methods of good practice, identify pitfalls and challenges and postulate promising advances towards potential future implementations and applications.
arXiv Detail & Related papers (2023-03-14T15:23:48Z)
- Unsupervised Deep Digital Staining For Microscopic Cell Images Via Knowledge Distillation [46.006296303296544]
It is difficult to obtain large-scale stained/unstained cell image pairs in practice.
We propose a novel unsupervised deep learning framework for the digital staining of cell images.
We show that the proposed unsupervised deep staining method can generate stained images with more accurate positions and shapes of the cell targets.
arXiv Detail & Related papers (2023-03-03T16:26:38Z)
- Stain-invariant self supervised learning for histopathology image analysis [74.98663573628743]
We present a self-supervised algorithm for several classification tasks within hematoxylin and eosin stained images of breast cancer.
Our method achieves the state-of-the-art performance on several publicly available breast cancer datasets.
arXiv Detail & Related papers (2022-11-14T18:16:36Z)
- Virtual stain transfer in histology via cascaded deep neural networks [2.309018557701645]
We demonstrate a virtual stain transfer framework via a cascaded deep neural network (C-DNN).
Unlike a single neural network structure which only takes one stain type as input to digitally output images of another stain type, C-DNN first uses virtual staining to transform autofluorescence microscopy images into H&E.
We successfully transferred the H&E-stained tissue images into virtual PAS (periodic acid-Schiff) stain.
arXiv Detail & Related papers (2022-07-14T00:43:18Z)
- Lymphocyte Classification in Hyperspectral Images of Ovarian Cancer Tissue Biopsy Samples [94.37521840642141]
We present a machine learning pipeline to segment white blood cell pixels in hyperspectral images of biopsy cores.
These cells are clinically important for diagnosis, but some prior work has struggled to incorporate them due to difficulty obtaining precise pixel labels.
arXiv Detail & Related papers (2022-03-23T00:58:27Z)
- Texture Characterization of Histopathologic Images Using Ecological Diversity Measures and Discrete Wavelet Transform [82.53597363161228]
This paper proposes a method for characterizing texture across histopathologic images with a considerable success rate.
It is possible to quantify the intrinsic properties of such images with promising accuracy on two HI datasets.
arXiv Detail & Related papers (2022-02-27T02:19:09Z)
- Stain Style Transfer of Histopathology Images Via Structure-Preserved Generative Learning [31.254432319814864]
This study proposes two stain style transfer models, SSIM-GAN and DSCSI-GAN, based on the generative adversarial networks.
By cooperating structural preservation metrics and feedback of an auxiliary diagnosis net in learning, medical-relevant information is preserved in color-normalized images.
arXiv Detail & Related papers (2020-07-24T15:30:19Z)
- Retinopathy of Prematurity Stage Diagnosis Using Object Segmentation and Convolutional Neural Networks [68.96150598294072]
Retinopathy of Prematurity (ROP) is an eye disorder primarily affecting premature infants with lower weights.
It causes proliferation of vessels in the retina and could result in vision loss and, eventually, retinal detachment, leading to blindness.
In recent years, there has been a significant effort to automate the diagnosis using deep learning.
This paper builds upon the success of previous models and develops a novel architecture, which combines object segmentation and convolutional neural networks (CNN).
Our proposed system first trains an object segmentation model to identify the demarcation line at a pixel level and adds the resulting mask as an additional "color" channel in
arXiv Detail & Related papers (2020-04-03T14:07:41Z)
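The mask-as-extra-channel idea summarized above can be sketched as follows; the function name, shapes, and mask content are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def add_mask_channel(image_rgb: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Stack a binary segmentation mask onto an RGB image as a 4th channel.

    image_rgb: (H, W, 3) float array; mask: (H, W) binary array.
    Returns a (H, W, 4) array suitable as CNN input, so the downstream
    classifier sees the segmented structure as an explicit input channel.
    """
    mask_channel = mask[..., None].astype(image_rgb.dtype)
    return np.concatenate([image_rgb, mask_channel], axis=-1)

# Stand-in fundus image and a placeholder demarcation-line mask.
img = np.zeros((128, 128, 3), dtype=np.float32)
msk = np.zeros((128, 128), dtype=np.uint8)
msk[60:70, :] = 1
x = add_mask_channel(img, msk)
print(x.shape)  # (128, 128, 4)
```

Concatenating the mask as a channel keeps the original pixel data intact while letting the network weight the segmented region explicitly, rather than overwriting one of the RGB channels.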
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information (including all summaries) and is not responsible for any consequences.