Building Trust in Virtual Immunohistochemistry: Automated Assessment of Image Quality
- URL: http://arxiv.org/abs/2511.04615v1
- Date: Thu, 06 Nov 2025 18:09:09 GMT
- Title: Building Trust in Virtual Immunohistochemistry: Automated Assessment of Image Quality
- Authors: Tushar Kataria, Shikha Dubey, Mary Bronner, Jolanta Jedrzkiewicz, Ben J. Brintz, Shireen Y. Elhabian, Beatrice S. Knudsen
- Abstract summary: Deep learning models can generate virtual immunohistochemistry (IHC) stains from hematoxylin and eosin (H&E) images. We introduce an automated and accuracy-grounded framework to determine image quality across sixteen paired or unpaired image translation models.
- Score: 3.8391050162498135
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning models can generate virtual immunohistochemistry (IHC) stains from hematoxylin and eosin (H&E) images, offering a scalable and low-cost alternative to laboratory IHC. However, reliable evaluation of image quality remains a challenge, as current texture- and distribution-based metrics quantify image fidelity rather than the accuracy of IHC staining. Here, we introduce an automated and accuracy-grounded framework to determine image quality across sixteen paired or unpaired image translation models. Using color deconvolution, we generate masks of pixels stained brown (i.e., IHC-positive) as predicted by each virtual IHC model. We use the segmented masks of real and virtual IHC to compute stain accuracy metrics (Dice, IoU, Hausdorff distance) that directly quantify correct pixel-level labeling without needing expert manual annotations. Our results demonstrate that conventional image fidelity metrics, including Fréchet Inception Distance (FID), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM), correlate poorly with stain accuracy and pathologist assessment. Paired models such as PyramidPix2Pix and AdaptiveNCE achieve the highest stain accuracy, whereas unpaired diffusion- and GAN-based models are less reliable in providing accurate IHC-positive pixel labels. Moreover, whole-slide images (WSI) reveal performance declines that are invisible in patch-based evaluations, emphasizing the need for WSI-level benchmarks. Together, this framework defines a reproducible approach for assessing the quality of virtual IHC models, a critical step to accelerate translation towards routine use by pathologists.
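The pipeline described in the abstract can be sketched in a few lines: separate the DAB (brown) stain by color deconvolution, threshold it into a binary IHC-positive mask, and compare real vs. virtual masks with Dice, IoU, and Hausdorff distance. The sketch below uses scikit-image's `rgb2hed` deconvolution and SciPy's `directed_hausdorff`; the DAB threshold value is an illustrative assumption, not the one used in the paper.

```python
import numpy as np
from skimage.color import rgb2hed
from scipy.spatial.distance import directed_hausdorff

def dab_mask(rgb_image, dab_threshold=0.03):
    """Segment IHC-positive (brown/DAB) pixels via color deconvolution.

    NOTE: the threshold on the DAB optical-density channel is an
    illustrative choice, not the paper's calibrated value.
    """
    hed = rgb2hed(rgb_image)           # unmix into hematoxylin, eosin, DAB
    return hed[..., 2] > dab_threshold

def stain_accuracy(real_mask, virtual_mask):
    """Dice, IoU, and symmetric Hausdorff distance between binary masks."""
    inter = np.logical_and(real_mask, virtual_mask).sum()
    union = np.logical_or(real_mask, virtual_mask).sum()
    total = real_mask.sum() + virtual_mask.sum()
    dice = 2.0 * inter / total if total else 1.0
    iou = inter / union if union else 1.0
    # Hausdorff distance over the coordinates of positive pixels
    a = np.argwhere(real_mask)
    b = np.argwhere(virtual_mask)
    hd = max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])
    return dice, iou, hd
```

Because the metrics operate on binary masks rather than raw pixels, they reward a virtual stain for labeling the *same pixels* positive as the real IHC, which is exactly the property FID/PSNR/SSIM do not measure.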
Related papers
- X-Mark: Saliency-Guided Robust Dataset Ownership Verification for Medical Imaging [67.85884025186755]
High-quality medical imaging datasets are essential for training deep learning models, but their unauthorized use raises serious copyright and ethical concerns. Medical imaging presents a unique challenge for existing dataset ownership verification methods designed for natural images. We propose X-Mark, a sample-specific clean-label watermarking method for chest x-ray copyright protection.
arXiv Detail & Related papers (2026-02-10T00:03:43Z) - Biology-driven assessment of deep learning super-resolution imaging of the porosity network in dentin [3.6401695744986866]
The mechanosensory system of teeth is believed to partly rely on odontoblast cell stimulation by fluid flow through a porosity network extending through dentin. Visualizing the smallest sub-microscopic porosity vessels requires the highest achievable resolution from confocal fluorescence microscopy. We tested different deep learning (DL) super-resolution (SR) models to allow faster experimental acquisitions of lower resolution images and restore optimal image quality by post-processing.
arXiv Detail & Related papers (2025-10-09T16:26:38Z) - From Pixels to Pathology: Restoration Diffusion for Diagnostic-Consistent Virtual IHC [37.284994932355865]
We introduce Star-Diff, a structure-aware staining restoration diffusion model that reformulates virtual staining as an image restoration task. By combining residual and noise-based generation pathways, Star-Diff maintains tissue structure while modeling realistic biomarker variability. Experiments on the BCI dataset demonstrate that Star-Diff achieves state-of-the-art (SOTA) performance in both visual fidelity and diagnostic relevance.
arXiv Detail & Related papers (2025-08-04T15:36:58Z) - ImplicitStainer: Data-Efficient Medical Image Translation for Virtual Antibody-based Tissue Staining Using Local Implicit Functions [1.9029890402585894]
Hematoxylin and eosin (H&E) staining is a gold standard for microscopic diagnosis in pathology, but it does not capture all the diagnostic information that may be needed.
arXiv Detail & Related papers (2025-05-14T22:22:52Z) - SCFANet: Style Distribution Constraint Feature Alignment Network For Pathological Staining Translation [0.11999555634662631]
We propose the Style Distribution Constraint Feature Alignment Network (SCFANet), which incorporates two innovative modules: the Style Distribution Constrainer (SDC) and Feature Alignment Learning (FAL). Our SCFANet model outperforms existing methods, achieving precise transformation of H&E-stained images into their IHC-stained counterparts.
arXiv Detail & Related papers (2025-04-01T07:29:53Z) - Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing pose serious risks for generative models. In this paper, we investigate how detection performance varies across model backbones, types, and datasets. We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
arXiv Detail & Related papers (2024-11-28T13:04:45Z) - Enhanced Sharp-GAN For Histopathology Image Synthesis [63.845552349914186]
Histopathology image synthesis aims to address the data shortage issue in training deep learning approaches for accurate cancer detection.
We propose a novel approach that enhances the quality of synthetic images by using nuclei topology and contour regularization.
The proposed approach outperforms Sharp-GAN in all four image quality metrics on two datasets.
arXiv Detail & Related papers (2023-01-24T17:54:01Z) - Negligible effect of brain MRI data preprocessing for tumor segmentation [36.89606202543839]
We conduct experiments on three publicly available datasets and evaluate the effect of different preprocessing steps in deep neural networks.
Our results demonstrate that most popular standardization steps add no value to the network performance.
We suggest that image intensity normalization approaches do not contribute to model accuracy because of the reduction of signal variance with image standardization.
arXiv Detail & Related papers (2022-04-11T17:29:36Z) - Texture Characterization of Histopathologic Images Using Ecological Diversity Measures and Discrete Wavelet Transform [82.53597363161228]
This paper proposes a method for characterizing texture across histopathologic images with a considerable success rate.
It is possible to quantify the intrinsic properties of such images with promising accuracy on two HI datasets.
arXiv Detail & Related papers (2022-02-27T02:19:09Z) - Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z) - StainNet: a fast and robust stain normalization network [0.7796684624647288]
This paper proposes a fast and robust stain normalization network with only 1.28K parameters named StainNet.
The proposed method performs well in stain normalization and achieves a better accuracy and image quality.
arXiv Detail & Related papers (2020-12-23T08:16:27Z) - Uncertainty-Aware Blind Image Quality Assessment in the Laboratory and Wild [98.48284827503409]
We develop a unified BIQA model and an approach to training it for both synthetic and realistic distortions.
We employ the fidelity loss to optimize a deep neural network for BIQA over a large number of such image pairs.
Experiments on six IQA databases show the promise of the learned method in blindly assessing image quality in the laboratory and wild.
arXiv Detail & Related papers (2020-05-28T13:35:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.