Autonomous Quality and Hallucination Assessment for Virtual Tissue Staining and Digital Pathology
- URL: http://arxiv.org/abs/2404.18458v1
- Date: Mon, 29 Apr 2024 06:32:28 GMT
- Title: Autonomous Quality and Hallucination Assessment for Virtual Tissue Staining and Digital Pathology
- Authors: Luzhe Huang, Yuzhu Li, Nir Pillar, Tal Keidar Haran, William Dean Wallace, Aydogan Ozcan
- Abstract summary: We present an autonomous quality and hallucination assessment method (termed AQuA) for virtual tissue staining.
AQuA achieves 99.8% accuracy when detecting acceptable and unacceptable virtually stained tissue images.
- Score: 0.11728348229595655
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Histopathological staining of human tissue is essential in the diagnosis of various diseases. The recent advances in virtual tissue staining technologies using AI alleviate some of the costly and tedious steps involved in the traditional histochemical staining process, permitting multiplexed rapid staining of label-free tissue without using staining reagents, while also preserving tissue. However, potential hallucinations and artifacts in these virtually stained tissue images pose concerns, especially for the clinical utility of these approaches. Quality assessment of histology images is generally performed by human experts, which can be subjective and depends on the training level of the expert. Here, we present an autonomous quality and hallucination assessment method (termed AQuA), mainly designed for virtual tissue staining, while also being applicable to histochemical staining. AQuA achieves 99.8% accuracy when detecting acceptable and unacceptable virtually stained tissue images without access to ground truth, and shows 98.5% agreement with the manual assessments made by board-certified pathologists. Moreover, AQuA achieves super-human performance in identifying realistic-looking, virtually stained hallucinatory images that would normally mislead human diagnosticians by deceiving them into diagnosing patients that never existed. We further demonstrate the wide adaptability of AQuA across various virtually and histochemically stained tissue images and showcase its strong external generalization to detect unseen hallucination patterns of virtual staining network models as well as artifacts observed in the traditional histochemical staining workflow. This framework creates new opportunities to enhance the reliability of virtual staining and will provide quality assurance for various image generation and transformation tasks in digital pathology and computational imaging.
Related papers
- FairSkin: Fair Diffusion for Skin Disease Image Generation [54.29840149709033]
Diffusion Model (DM) has become a leading method in generating synthetic medical images, but it suffers from a critical twofold bias.
We propose FairSkin, a novel DM framework that mitigates these biases through a three-level resampling mechanism.
Our approach significantly improves the diversity and quality of generated images, contributing to more equitable skin disease detection in clinical settings.
arXiv Detail & Related papers (2024-10-29T21:37:03Z)
- Generating Seamless Virtual Immunohistochemical Whole Slide Images with Content and Color Consistency [2.063403009505468]
Immunohistochemical (IHC) stains play a vital role in a pathologist's analysis of medical images, providing crucial diagnostic information for various diseases.
Virtual staining from hematoxylin and eosin (H&E)-stained whole slide images (WSIs) allows the automatic production of other useful IHC stains without the expensive physical staining process.
Current virtual WSI generation methods based on tile-wise processing often suffer from inconsistencies in content, texture, and color at tile boundaries.
We propose CC-WSI-Net, a novel consistent WSI synthesis network that extends GAN models to generate seamless virtual IHC WSIs with content and color consistency across tile boundaries.
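The tile-boundary inconsistency described above is commonly mitigated by generating overlapping tiles and feathering them together at the seams. As a hedged illustration (a generic linear-blending sketch, not CC-WSI-Net's actual method; the function name is hypothetical):

```python
import numpy as np

def blend_tiles(left, right, overlap):
    """Feather two horizontally adjacent tiles across a shared overlap.

    left, right: (H, W, C) float arrays; the last `overlap` columns of
    `left` depict the same tissue as the first `overlap` columns of `right`.
    Returns the seamlessly merged image.
    """
    # Weights ramp from 1 (pure left tile) to 0 (pure right tile).
    alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]
    seam = alpha * left[:, -overlap:] + (1 - alpha) * right[:, :overlap]
    return np.concatenate(
        [left[:, :-overlap], seam, right[:, overlap:]], axis=1
    )

# Toy usage: a dark tile meeting a bright tile.
tile_a = np.zeros((4, 8, 3))
tile_b = np.ones((4, 8, 3))
merged = blend_tiles(tile_a, tile_b, overlap=4)  # shape (4, 12, 3); values ramp 0 -> 1 across the seam
```

Linear (alpha) blending removes hard intensity steps at tile borders but does not by itself enforce content or color consistency between tiles, which is the harder problem the GAN-based approach targets.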
arXiv Detail & Related papers (2024-10-01T21:02:16Z)
- StealthDiffusion: Towards Evading Diffusion Forensic Detection through Diffusion Model [62.25424831998405]
StealthDiffusion is a framework that modifies AI-generated images into high-quality, imperceptible adversarial examples.
It is effective in both white-box and black-box settings, transforming AI-generated images into high-quality adversarial forgeries.
arXiv Detail & Related papers (2024-08-11T01:22:29Z)
- Single color virtual H&E staining with In-and-Out Net [0.8271394038014485]
This paper introduces a novel network, In-and-Out Net, specifically designed for virtual staining tasks.
Based on Generative Adversarial Networks (GAN), our model efficiently transforms Reflectance Confocal Microscopy (RCM) images into Hematoxylin and Eosin stained images.
arXiv Detail & Related papers (2024-05-22T01:17:27Z)
- Virtual histological staining of unlabeled autopsy tissue [1.9351365037275405]
We show that a trained neural network can transform autofluorescence images of label-free autopsy tissue sections into brightfield equivalent images that match hematoxylin and eosin stained versions of the same samples.
Our virtual autopsy staining technique can also be extended to necrotic tissue, and can rapidly and cost-effectively generate artifact-free H&E stains despite severe autolysis and cell death.
arXiv Detail & Related papers (2023-08-02T03:31:22Z)
- Automated Whole Slide Imaging for Label-Free Histology using Photon Absorption Remote Sensing Microscopy [0.0]
Current staining and advanced labeling methods are often destructive and mutually incompatible.
We present an alternative label-free histology platform using the first transmission-mode Photon Absorption Remote Sensing microscope.
arXiv Detail & Related papers (2023-04-26T12:36:19Z)
- Digital staining in optical microscopy using deep learning -- a review [47.86254766044832]
Digital staining has emerged as a promising concept to use modern deep learning for the translation from optical contrast to established biochemical contrast of actual stainings.
We provide an in-depth analysis of the current state-of-the-art in this field, suggest methods of good practice, identify pitfalls and challenges and postulate promising advances towards potential future implementations and applications.
arXiv Detail & Related papers (2023-03-14T15:23:48Z)
- Virtual stain transfer in histology via cascaded deep neural networks [2.309018557701645]
We demonstrate a virtual stain transfer framework via a cascaded deep neural network (C-DNN)
Unlike a single neural network structure which only takes one stain type as input to digitally output images of another stain type, C-DNN first uses virtual staining to transform autofluorescence microscopy images into H&E images.
We successfully transferred the H&E-stained tissue images into virtual PAS (periodic acid-Schiff) stain.
arXiv Detail & Related papers (2022-07-14T00:43:18Z)
- Texture Characterization of Histopathologic Images Using Ecological Diversity Measures and Discrete Wavelet Transform [82.53597363161228]
This paper proposes a method for characterizing texture across histopathologic images with a considerable success rate.
It is possible to quantify the intrinsic properties of such images with promising accuracy on two HI datasets.
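As a rough, hedged sketch of the wavelet side of such texture characterization (a generic one-level Haar decomposition with subband-energy descriptors, not the paper's exact pipeline; function names are hypothetical):

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar wavelet transform of a grayscale image with
    even dimensions. Returns the (LL, LH, HL, HH) subbands."""
    a = (img[0::2] + img[1::2]) / 2.0   # row-pair averages
    d = (img[0::2] - img[1::2]) / 2.0   # row-pair details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hl = (a[:, 0::2] - a[:, 1::2]) / 2.0
    lh = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def texture_energies(img):
    """Mean squared coefficient per subband: a simple texture descriptor
    vector that could feed a downstream classifier."""
    return [float(np.mean(b ** 2)) for b in haar_dwt2(img)]

# Vertical stripes concentrate energy in the LL and HL (horizontal-detail) bands.
stripes = np.tile([0.0, 1.0], (4, 2))
print(texture_energies(stripes))  # -> [0.25, 0.0, 0.25, 0.0]
```

In practice such subband energies would be combined with the ecological diversity measures the paper names before classifying the histopathologic images.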
arXiv Detail & Related papers (2022-02-27T02:19:09Z)
- Assessing glaucoma in retinal fundus photographs using Deep Feature Consistent Variational Autoencoders [63.391402501241195]
Glaucoma is challenging to detect early because it remains asymptomatic until the disease is advanced.
Early identification of glaucoma is generally made based on functional, structural, and clinical assessments.
Deep learning methods have partially solved this dilemma by bypassing the marker identification stage and analyzing high-level information directly to classify the data.
arXiv Detail & Related papers (2021-10-04T16:06:49Z)
- Modeling and Enhancing Low-quality Retinal Fundus Images [167.02325845822276]
Low-quality fundus images increase uncertainty in clinical observation and lead to the risk of misdiagnosis.
We propose a clinically oriented fundus enhancement network (cofe-Net) to suppress global degradation factors.
Experiments on both synthetic and real images demonstrate that our algorithm effectively corrects low-quality fundus images without losing retinal details.
arXiv Detail & Related papers (2020-05-12T08:01:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.