Cross-Modality Learning for Predicting IHC Biomarkers from H&E-Stained Whole-Slide Images
- URL: http://arxiv.org/abs/2506.15853v1
- Date: Wed, 18 Jun 2025 20:01:14 GMT
- Title: Cross-Modality Learning for Predicting IHC Biomarkers from H&E-Stained Whole-Slide Images
- Authors: Amit Das, Naofumi Tomita, Kyle J. Syme, Weijie Ma, Paige O'Connor, Kristin N. Corbett, Bing Ren, Xiaoying Liu, Saeed Hassanpour
- Abstract summary: HistoStainAlign predicts IHC staining patterns directly from H&E whole-slide images. HistoStainAlign achieved weighted F1 scores of 0.735 [95% Confidence Interval (CI): 0.670-0.799], 0.830 [95% CI: 0.772-0.886], and 0.723 [95% CI: 0.607-0.836], respectively, for these three IHC stains.
- Score: 4.650292435891902
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hematoxylin and Eosin (H&E) staining is a cornerstone of pathological analysis, offering reliable visualization of cellular morphology and tissue architecture for cancer diagnosis, subtyping, and grading. Immunohistochemistry (IHC) staining provides molecular insights by detecting specific proteins within tissues, enhancing diagnostic accuracy, and improving treatment planning. However, IHC staining is costly, time-consuming, and resource-intensive, requiring specialized expertise. To address these limitations, this study proposes HistoStainAlign, a novel deep learning framework that predicts IHC staining patterns directly from H&E whole-slide images (WSIs) by learning joint representations of morphological and molecular features. The framework integrates paired H&E and IHC embeddings through a contrastive training strategy, capturing complementary features across staining modalities without patch-level annotations or tissue registration. The model was evaluated on gastrointestinal and lung tissue WSIs with three commonly used IHC stains: P53, PD-L1, and Ki-67. HistoStainAlign achieved weighted F1 scores of 0.735 [95% Confidence Interval (CI): 0.670-0.799], 0.830 [95% CI: 0.772-0.886], and 0.723 [95% CI: 0.607-0.836], respectively for these three IHC stains. Embedding analyses demonstrated the robustness of the contrastive alignment in capturing meaningful cross-stain relationships. Comparisons with a baseline model further highlight the advantage of incorporating contrastive learning for improved stain pattern prediction. This study demonstrates the potential of computational approaches to serve as a pre-screening tool, helping prioritize cases for IHC staining and improving workflow efficiency.
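The abstract describes aligning paired H&E and IHC slide embeddings with a contrastive training strategy. As a rough illustration of that idea (not the paper's actual implementation), the sketch below computes a symmetric InfoNCE-style loss over a batch of paired slide embeddings; the function name, embedding sizes, and temperature are all illustrative assumptions.

```python
import numpy as np

def info_nce_loss(he_emb, ihc_emb, temperature=0.1):
    """Symmetric InfoNCE loss over paired H&E / IHC slide embeddings.

    Row i of `he_emb` is the slide-level embedding of an H&E WSI and row i
    of `ihc_emb` is its paired IHC embedding; all other rows in the batch
    act as negatives. Hypothetical sketch, not the paper's exact method.
    """
    # L2-normalize so the dot product is cosine similarity
    he = he_emb / np.linalg.norm(he_emb, axis=1, keepdims=True)
    ihc = ihc_emb / np.linalg.norm(ihc_emb, axis=1, keepdims=True)
    logits = he @ ihc.T / temperature          # (N, N) similarity matrix
    labels = np.arange(len(he))                # positives lie on the diagonal

    def xent(l):
        # cross-entropy of each row against its diagonal positive
        l = l - l.max(axis=1, keepdims=True)   # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average over both retrieval directions (H&E -> IHC and IHC -> H&E)
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
he = rng.normal(size=(8, 128))
# nearly identical pairs should yield a much smaller loss than random pairs
loss = info_nce_loss(he, he + 0.01 * rng.normal(size=(8, 128)))
```

The loss pulls each H&E embedding toward its paired IHC embedding and pushes it away from the other slides in the batch, which is the general mechanism contrastive alignment relies on.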
Related papers
- A Study of Anatomical Priors for Deep Learning-Based Segmentation of Pheochromocytoma in Abdominal CT [3.2784582049471505]
This study systematically evaluates anatomical priors to identify configurations that improve deep learning-based PCC segmentation. We employed the nnU-Net framework to evaluate eleven annotation strategies for accurate 3D segmentation of pheochromocytoma.
arXiv Detail & Related papers (2025-07-21T02:35:29Z) - Score-based Diffusion Model for Unpaired Virtual Histology Staining [7.648204151998162]
Hematoxylin and eosin (H&E) staining visualizes histology but lacks specificity for diagnostic markers. Immunohistochemistry (IHC) staining provides protein-targeted staining but is restricted by tissue availability and antibody specificity. Virtual staining, i.e., translating the H&E image to its IHC counterpart while preserving tissue structure, is promising for efficient IHC generation. This study proposes a mutual-information (MI)-guided score-based diffusion model for unpaired virtual staining.
arXiv Detail & Related papers (2025-06-29T11:02:45Z) - Predicting Neoadjuvant Chemotherapy Response in Triple-Negative Breast Cancer Using Pre-Treatment Histopathologic Images [2.23127246021293]
Triple-negative breast cancer (TNBC) remains a major clinical challenge due to its aggressive behavior and lack of targeted therapies. We present an attention-based multiple instance learning framework designed to predict pathologic complete response (pCR) directly from pre-treatment hematoxylin and eosin (H&E)-stained biopsy slides.
arXiv Detail & Related papers (2025-05-20T02:06:34Z) - Artificial Intelligence-Assisted Prostate Cancer Diagnosis for Reduced Use of Immunohistochemistry [0.03775355948495808]
We evaluate an AI model's ability to minimize IHC use without compromising diagnostic accuracy. Applying sensitivity-prioritized diagnostic thresholds reduced the need for IHC staining by 44.4%, 42.0%, and 20.7% in the three cohorts investigated.
arXiv Detail & Related papers (2025-03-31T08:54:57Z) - Integrating Deep Learning with Fundus and Optical Coherence Tomography for Cardiovascular Disease Prediction [47.7045293755736]
Early identification of patients at risk of cardiovascular diseases (CVD) is crucial for effective preventive care, reducing healthcare burden, and improving patients' quality of life.
This study demonstrates the potential of retinal optical coherence tomography (OCT) imaging combined with fundus photographs for identifying future adverse cardiac events.
We propose a novel binary classification network based on a Multi-channel Variational Autoencoder (MCVAE), which learns a latent embedding of patients' fundus and OCT images to classify individuals into two groups: those likely to develop CVD in the future and those who are not.
arXiv Detail & Related papers (2024-10-18T12:37:51Z) - Improved Esophageal Varices Assessment from Non-Contrast CT Scans [15.648325577912608]
Esophageal varices (EV) are a serious health concern resulting from portal hypertension.
Despite non-contrast computed tomography (NC-CT) imaging being a less expensive and non-invasive imaging modality, it has yet to gain full acceptance as a primary clinical diagnostic tool for EV evaluation.
We present the Multi-Organ-cOhesion-Network (MOON), a novel framework enhancing the analysis of critical organ features in NC-CT scans for effective assessment of EV.
arXiv Detail & Related papers (2024-07-18T06:49:10Z) - AXIAL: Attention-based eXplainability for Interpretable Alzheimer's Localized Diagnosis using 2D CNNs on 3D MRI brain scans [43.06293430764841]
This study presents an innovative method for Alzheimer's disease diagnosis using 3D MRI designed to enhance the explainability of model decisions.
Our approach adopts a soft attention mechanism, enabling 2D CNNs to extract volumetric representations.
With voxel-level precision, our method identifies which specific brain regions the model attends to most, highlighting the predominant areas driving its decisions.
arXiv Detail & Related papers (2024-07-02T16:44:00Z) - TACCO: Task-guided Co-clustering of Clinical Concepts and Patient Visits for Disease Subtyping based on EHR Data [42.96821770394798]
TACCO is a novel framework that jointly discovers clusters of clinical concepts and patient visits based on a hypergraph modeling of EHR data.
We conduct experiments on the public MIMIC-III dataset and Emory internal CRADLE dataset over the downstream clinical tasks of phenotype classification and cardiovascular risk prediction.
In-depth model analysis, clustering results analysis, and clinical case studies further validate the improved utilities and insightful interpretations delivered by TACCO.
arXiv Detail & Related papers (2024-06-14T14:18:38Z) - IHC Matters: Incorporating IHC analysis to H&E Whole Slide Image Analysis for Improved Cancer Grading via Two-stage Multimodal Bilinear Pooling Fusion [19.813558168408047]
We show that IHC and H&E possess distinct advantages and disadvantages while possessing certain complementary qualities.
We develop a two-stage multi-modal bilinear model with a feature pooling module.
Experiments demonstrate that incorporating IHC data into machine learning models, alongside H&E stained images, leads to superior predictive results for cancer grading.
arXiv Detail & Related papers (2024-05-13T21:21:44Z) - CIMIL-CRC: a clinically-informed multiple instance learning framework for patient-level colorectal cancer molecular subtypes classification from H\&E stained images [42.771819949806655]
We introduce CIMIL-CRC, a framework that solves the MSI/MSS MIL problem by efficiently combining a pre-trained feature extraction model with principal component analysis (PCA) to aggregate information from all patches.
We assessed our CIMIL-CRC method using the average area under the curve (AUC) from a 5-fold cross-validation experimental setup for model development on the TCGA-CRC-DX cohort.
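The CIMIL-CRC summary above describes aggregating patch-level features into a slide-level representation via PCA. A minimal sketch of that general idea follows; the function name, the choice of k components, and the pooling of projections are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def pca_slide_embedding(patch_features, k=4):
    """Aggregate per-patch features into one slide-level vector via PCA.

    Instead of averaging raw patch features, project the patch-feature
    matrix onto its top-k principal components and summarize each one.
    Hypothetical sketch of the PCA-aggregation idea described above.
    """
    x = patch_features - patch_features.mean(axis=0)   # center features
    _, _, vt = np.linalg.svd(x, full_matrices=False)   # principal axes (rows of vt)
    projections = x @ vt[:k].T                         # (n_patches, k) component scores
    return np.abs(projections).mean(axis=0)            # one k-dim vector per slide

rng = np.random.default_rng(1)
patches = rng.normal(size=(200, 64))                   # 200 patches, 64-dim features
slide_vec = pca_slide_embedding(patches, k=4)          # 4-dim slide descriptor
```

The appeal of such an aggregation is that it compresses an arbitrary number of patches into a fixed-size slide descriptor that a downstream classifier can consume.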
arXiv Detail & Related papers (2024-01-29T12:56:11Z) - Evaluating Deep Learning-based Melanoma Classification using Immunohistochemistry and Routine Histology: A Three Center Study [1.4053129774629076]
Pathologists routinely use hematoxylin and eosin (H&E)-stained tissue slides alongside immunohistochemical staining against MelanA.
DL MelanA-based assistance systems show the same performance as the benchmark H&E classification.
arXiv Detail & Related papers (2023-09-07T06:09:12Z) - Self-supervised contrastive learning of echocardiogram videos enables label-efficient cardiac disease diagnosis [48.64462717254158]
We developed EchoCLR, a self-supervised contrastive learning approach tailored to echocardiogram videos.
When fine-tuned on small portions of labeled data, EchoCLR pretraining significantly improved classification performance for left ventricular hypertrophy (LVH) and aortic stenosis (AS).
EchoCLR is unique in its ability to learn representations of medical videos and demonstrates that SSL can enable label-efficient disease classification from small, labeled datasets.
arXiv Detail & Related papers (2022-07-23T19:17:26Z) - Deep learning-based transformation of the H&E stain into special stains [44.38127957263123]
We show the utility of supervised learning-based computational stain transformation from H&E to different special stains using tissue sections from kidney needle core biopsies.
Results: The quality of the special stains generated by the stain transformation network was statistically equivalent to those generated through standard histochemical staining.
arXiv Detail & Related papers (2020-08-20T10:12:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.