Adaptive Supervised PatchNCE Loss for Learning H&E-to-IHC Stain
Translation with Inconsistent Groundtruth Image Pairs
- URL: http://arxiv.org/abs/2303.06193v1
- Date: Fri, 10 Mar 2023 19:56:34 GMT
- Authors: Fangda Li, Zhiqiang Hu, Wen Chen and Avinash Kak
- Abstract summary: We present a new loss function, Adaptive Supervised PatchNCE (ASP), to deal with the input-to-target inconsistencies in a proposed H&E-to-IHC image-to-image translation framework.
In our experiments, we demonstrate that our proposed method outperforms existing image-to-image translation methods for stain translation to multiple IHC stains.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Immunohistochemical (IHC) staining highlights the molecular information
critical to diagnostics in tissue samples. However, compared to H&E staining,
IHC staining can be much more expensive in terms of both labor and the
laboratory equipment required. This motivates recent research that demonstrates
that the correlations between the morphological information present in the
H&E-stained slides and the molecular information in the IHC-stained slides can
be used for H&E-to-IHC stain translation. However, due to a lack of
pixel-perfect H&E-IHC groundtruth pairs, most existing methods have resorted to
relying on expert annotations. To remedy this situation, we present a new loss
function, Adaptive Supervised PatchNCE (ASP), to deal directly with the
input-to-target inconsistencies in a proposed H&E-to-IHC image-to-image translation
framework. The ASP loss is built upon a patch-based contrastive learning
criterion, named Supervised PatchNCE (SP), and augments it further with weight
scheduling to mitigate the negative impact of noisy supervision. Lastly, we
introduce the Multi-IHC Stain Translation (MIST) dataset, which contains
aligned H&E-IHC patches for 4 different IHC stains critical to breast cancer
diagnosis. In our experiments, we demonstrate that our proposed method
outperforms existing image-to-image translation methods for stain translation
to multiple IHC stains. All of our code and datasets are available at
https://github.com/lifangda01/AdaptiveSupervisedPatchNCE.
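The abstract's core mechanism, a supervised patch-wise contrastive (SP) criterion whose per-patch terms are reweighted over training to suppress noisy supervision, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the linear warm-up schedule and the similarity-based down-weighting used here are hypothetical stand-ins for the actual ASP weight scheduling.

```python
import numpy as np

def supervised_patchnce(pred_feats, gt_feats, tau=0.07):
    """Supervised PatchNCE (SP) sketch.

    Each predicted-patch feature is pulled toward the feature of the patch
    at the same spatial location in the ground-truth IHC image (the
    positive) and pushed away from all other ground-truth patches (the
    negatives).  Inputs are (N, D) L2-normalised patch features; returns
    the per-patch InfoNCE losses, shape (N,).
    """
    logits = pred_feats @ gt_feats.T / tau           # (N, N) similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    exp = np.exp(logits)
    idx = np.arange(len(exp))
    # Cross-entropy with the same-location (diagonal) patch as the positive
    return -np.log(exp[idx, idx] / exp.sum(axis=1))

def asp_loss(pred_feats, gt_feats, step, total_steps, tau=0.07):
    """Adaptive SP sketch: scheduled weights mitigate noisy supervision.

    Patches whose prediction/ground-truth similarity is low (likely
    inconsistent H&E-IHC pairs) are down-weighted, with the weighting
    ramped in linearly over training.  This schedule is an assumption
    for illustration, not the paper's exact formulation.
    """
    per_patch = supervised_patchnce(pred_feats, gt_feats, tau)
    sim = np.sum(pred_feats * gt_feats, axis=1)      # paired cosine similarity
    alpha = min(1.0, step / max(1, total_steps))     # 0 -> uniform, 1 -> adaptive
    weights = (1.0 - alpha) + alpha * np.clip(sim, 0.0, 1.0)
    return float(np.sum(weights * per_patch) / (np.sum(weights) + 1e-8))
```

In the actual framework the patch features would come from an encoder applied to the generated and ground-truth IHC images; here random normalized vectors suffice to exercise the loss.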
Related papers
- DeReStainer: H&E to IHC Pathological Image Translation via Decoupled Staining Channels [10.321593505248341]
We propose a destain-restain framework for converting H&E staining to IHC staining.
We further design loss functions specifically for Hematoxylin and Diaminobenzidin (DAB) channels to generate IHC images.
arXiv Detail & Related papers (2024-09-01T07:56:33Z)
- Advancing H&E-to-IHC Stain Translation in Breast Cancer: A Multi-Magnification and Attention-Based Approach [13.88935300094334]
We propose a novel model integrating attention mechanisms and multi-magnification information processing.
Our model employs a multi-magnification processing strategy to extract and utilize information from various magnifications within pathology images.
Rigorous testing on a publicly available breast cancer dataset demonstrates superior performance compared to existing methods.
arXiv Detail & Related papers (2024-08-04T04:55:10Z)
- VIMs: Virtual Immunohistochemistry Multiplex staining via Text-to-Stain Diffusion Trained on Uniplex Stains [0.9920087186610302]
IHC stains are crucial in pathology practice for resolving complex diagnostic questions and guiding patient treatment decisions.
Small biopsies often lack sufficient tissue for multiple stains while preserving material for subsequent molecular testing.
VIMs is the first model to address this need, leveraging a large vision-language single-step diffusion model for virtual IHC multiplexing.
arXiv Detail & Related papers (2024-07-26T22:23:45Z)
- Mix-Domain Contrastive Learning for Unpaired H&E-to-IHC Stain Translation [14.719264181466766]
We propose a Mix-Domain Contrastive Learning (MDCL) method to leverage the supervision information in unpaired H&E-to-IHC stain translation.
With mix-domain pathology information aggregation, MDCL enhances the pathological consistency between corresponding patches and the discrepancy between patches from different positions of the generated IHC image.
arXiv Detail & Related papers (2024-06-17T17:47:44Z)
- IHC Matters: Incorporating IHC analysis to H&E Whole Slide Image Analysis for Improved Cancer Grading via Two-stage Multimodal Bilinear Pooling Fusion [19.813558168408047]
We show that IHC and H&E possess distinct advantages and disadvantages while possessing certain complementary qualities.
We develop a two-stage multi-modal bilinear model with a feature pooling module.
Experiments demonstrate that incorporating IHC data into machine learning models, alongside H&E stained images, leads to superior predictive results for cancer grading.
arXiv Detail & Related papers (2024-05-13T21:21:44Z)
- Improving Vision Anomaly Detection with the Guidance of Language Modality [64.53005837237754]
This paper tackles the challenges of the vision modality from a multimodal point of view.
We propose Cross-modal Guidance (CMG) to tackle the redundant information issue and sparse space issue.
To learn a more compact latent space for the vision anomaly detector, CMLE learns a correlation structure matrix from the language modality.
arXiv Detail & Related papers (2023-10-04T13:44:56Z)
- Stain-invariant self supervised learning for histopathology image analysis [74.98663573628743]
We present a self-supervised algorithm for several classification tasks within hematoxylin and eosin stained images of breast cancer.
Our method achieves the state-of-the-art performance on several publicly available breast cancer datasets.
arXiv Detail & Related papers (2022-11-14T18:16:36Z)
- Texture Characterization of Histopathologic Images Using Ecological Diversity Measures and Discrete Wavelet Transform [82.53597363161228]
This paper proposes a method for characterizing texture across histopathologic images with a considerable success rate.
It is possible to quantify the intrinsic properties of such images with promising accuracy on two HI datasets.
arXiv Detail & Related papers (2022-02-27T02:19:09Z)
- Categorical Relation-Preserving Contrastive Knowledge Distillation for Medical Image Classification [75.27973258196934]
We propose a novel Categorical Relation-preserving Contrastive Knowledge Distillation (CRCKD) algorithm, which takes the commonly used mean-teacher model as the supervisor.
With this regularization, the feature distribution of the student model shows higher intra-class similarity and inter-class variance.
With the contribution of the CCD and CRP, our CRCKD algorithm can distill the relational knowledge more comprehensively.
arXiv Detail & Related papers (2021-07-07T13:56:38Z)
- Dual-Consistency Semi-Supervised Learning with Uncertainty Quantification for COVID-19 Lesion Segmentation from CT Images [49.1861463923357]
We propose an uncertainty-guided dual-consistency learning network (UDC-Net) for semi-supervised COVID-19 lesion segmentation from CT images.
Our proposed UDC-Net improves the fully supervised method by 6.3% in Dice and outperforms other competitive semi-supervised approaches by significant margins.
arXiv Detail & Related papers (2021-04-07T16:23:35Z)
- Alleviating the Incompatibility between Cross Entropy Loss and Episode Training for Few-shot Skin Disease Classification [76.89093364969253]
We propose to apply Few-Shot Learning to skin disease identification to address the extreme scarcity of training samples.
Based on a detailed analysis, we propose the Query-Relative (QR) loss, which proves superior to Cross Entropy (CE) under episode training.
We further strengthen the proposed QR loss with a novel adaptive hard margin strategy.
arXiv Detail & Related papers (2020-04-21T00:57:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.