RandStainNA: Learning Stain-Agnostic Features from Histology Slides by
Bridging Stain Augmentation and Normalization
- URL: http://arxiv.org/abs/2206.12694v1
- Date: Sat, 25 Jun 2022 16:43:59 GMT
- Title: RandStainNA: Learning Stain-Agnostic Features from Histology Slides by
Bridging Stain Augmentation and Normalization
- Authors: Yiqing Shen, Yulin Luo, Dinggang Shen, Jing Ke
- Abstract summary: Two proposals, namely stain normalization (SN) and stain augmentation (SA), have been spotlighted to reduce the generalization error.
To address these problems, we unify SN and SA with a novel RandStainNA scheme.
RandStainNA constrains variable stain styles to a practicable range to train a stain-agnostic deep learning model.
- Score: 45.81689497433507
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Stain variations often decrease the generalization ability of deep learning based approaches in digital histopathology analysis. Two separate proposals, namely stain normalization (SN) and stain augmentation (SA), have been spotlighted to reduce the generalization error: the former alleviates the stain shift across different medical centers using a template image, while the latter enriches the accessible stain styles by simulating more stain variations. However, their applications are bounded by the selection of template images and the construction of unrealistic styles. To address these problems, we unify SN and SA with a novel RandStainNA scheme, which constrains variable stain styles to a practicable range in order to train a stain-agnostic deep learning model. RandStainNA is applicable to stain normalization in a collection of color spaces, i.e., HED, HSV, and LAB. Additionally, we propose a random color-space selection scheme to gain an extra performance improvement. We evaluate our method on two diagnostic tasks, i.e., tissue subtype classification and nuclei segmentation, with various network backbones. The performance superiority over both SA and SN shows that the proposed RandStainNA consistently improves generalization ability, so that our models can cope with incoming clinical datasets with unpredictable stain styles. The code is available at https://github.com/yiqings/RandStainNA.
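As a rough illustration of RandStainNA's recipe, the hedged Python sketch below fits the distribution of per-channel color statistics over a training set, then normalizes each image toward a randomly sampled "virtual template" in a randomly chosen color space (the random color-space selection mentioned above). The Gaussian model of the style statistics and all function names are assumptions made for this sketch, and HED is omitted to keep the dependencies to OpenCV and NumPy; see the linked repository for the authors' implementation.

```python
import random
import numpy as np
import cv2  # pip install opencv-python

# Forward/inverse conversions for two of the paper's color spaces; HED
# (available via skimage.color.rgb2hed) is omitted in this sketch.
CVT = {
    "LAB": (cv2.COLOR_RGB2LAB, cv2.COLOR_LAB2RGB),
    "HSV": (cv2.COLOR_RGB2HSV, cv2.COLOR_HSV2RGB),
}

def fit_style_distribution(images, space):
    """Estimate, over the training set, the distribution of per-channel
    image statistics: mean/std of the channel means and channel stds
    (a Gaussian assumption made for this sketch)."""
    fwd, _ = CVT[space]
    means, stds = [], []
    for img in images:  # img: uint8 RGB array of shape (H, W, 3)
        x = cv2.cvtColor(img, fwd).astype(np.float32).reshape(-1, 3)
        means.append(x.mean(axis=0))
        stds.append(x.std(axis=0))
    means, stds = np.stack(means), np.stack(stds)
    return {"mu_m": means.mean(0), "sd_m": means.std(0),
            "mu_s": stds.mean(0), "sd_s": stds.std(0)}

def rand_stain_na(img, style_dists):
    """Normalize `img` toward a virtual template sampled from the fitted
    style distribution, in a randomly selected color space."""
    space = random.choice(list(style_dists))  # random color-space selection
    fwd, inv = CVT[space]
    d = style_dists[space]
    t_mean = np.random.normal(d["mu_m"], d["sd_m"])           # template means
    t_std = np.abs(np.random.normal(d["mu_s"], d["sd_s"])) + 1e-6
    x = cv2.cvtColor(img, fwd).astype(np.float32)
    mean = x.reshape(-1, 3).mean(0)
    std = x.reshape(-1, 3).std(0) + 1e-6
    # Reinhard-style channel-wise shift/scale toward the sampled template.
    x = (x - mean) / std * t_std + t_mean
    x = np.clip(x, 0, 255).astype(np.uint8)  # 8-bit OpenCV value ranges
    return cv2.cvtColor(x, inv)
```

In a training pipeline, `rand_stain_na` would sit where a color-jitter transform normally does, applied to each image with some probability; unlike classical SN there is no fixed template, and unlike naive SA the sampled styles stay within the range observed in real slides.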
Related papers
- Unsupervised Latent Stain Adaptation for Computational Pathology [2.483372684394528]
Stain adaptation aims to reduce the generalization error between different stains by training a model on source stains that generalizes to target stains.
We propose joint training on artificially labeled and unlabeled data, including all available stained images, called Unsupervised Latent Stain Adaptation (ULSA).
Our method uses stain translation to enrich labeled source images with synthetic target images in order to increase the supervised signals.
arXiv Detail & Related papers (2024-06-27T11:08:42Z) - Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
arXiv Detail & Related papers (2024-03-19T09:28:19Z) - DARC: Distribution-Aware Re-Coloring Model for Generalizable Nucleus
Segmentation [68.43628183890007]
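For intuition, the residual adapters described in the entry above are typically small bottleneck modules attached to frozen encoder stages, with their output added back onto the features. A minimal PyTorch sketch with illustrative names and sizes (an assumption, not the paper's code):

```python
import torch
import torch.nn as nn

class ResidualAdapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, then
    add the result back onto the frozen features (a residual connection)."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

# One adapter per encoder level; only the adapters are trained while the
# pre-trained visual encoder stays frozen.
adapters = nn.ModuleList([ResidualAdapter(dim=768) for _ in range(4)])
```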
- DARC: Distribution-Aware Re-Coloring Model for Generalizable Nucleus Segmentation [68.43628183890007]
We argue that domain gaps can also be caused by different foreground (nucleus)-background ratios.
First, we introduce a re-coloring method that relieves dramatic image color variations between different domains.
Second, we propose a new instance normalization method that is robust to the variation in the foreground-background ratios.
arXiv Detail & Related papers (2023-09-01T01:01:13Z) - A Laplacian Pyramid Based Generative H&E Stain Augmentation Network [5.841841666625825]
- A Laplacian Pyramid Based Generative H&E Stain Augmentation Network [5.841841666625825]
Generative Stain Augmentation Network (G-SAN) is a GAN-based framework that augments a collection of cell images with simulated stain variations.
Using G-SAN-augmented training data provides on average 15.7% improvement in F1 score and 7.3% improvement in panoptic quality.
arXiv Detail & Related papers (2023-05-23T17:43:18Z) - Stain-invariant self supervised learning for histopathology image
analysis [74.98663573628743]
We present a self-supervised algorithm for several classification tasks within hematoxylin and eosin stained images of breast cancer.
Our method achieves the state-of-the-art performance on several publicly available breast cancer datasets.
arXiv Detail & Related papers (2022-11-14T18:16:36Z) - Improving Deep Facial Phenotyping for Ultra-rare Disorder Verification
Using Model Ensembles [52.77024349608834]
We analyze the influence of replacing a DCNN with a state-of-the-art face recognition approach, iResNet with ArcFace.
Our proposed ensemble model achieves state-of-the-art performance on both seen and unseen disorders.
arXiv Detail & Related papers (2022-11-12T23:28:54Z) - HistoStarGAN: A Unified Approach to Stain Normalisation, Stain Transfer
and Stain Invariant Segmentation in Renal Histopathology [0.5505634045241288]
HistoStarGAN is a unified framework that performs stain transfer between multiple stainings.
It can serve as a synthetic data generator, which paves the way for the use of fully annotated synthetic image data.
arXiv Detail & Related papers (2022-10-18T12:22:26Z) - Stain-Adaptive Self-Supervised Learning for Histopathology Image
Analysis [3.8073142980733]
We propose a novel Stain-Adaptive Self-Supervised Learning (SASSL) method for histopathology image analysis.
Our SASSL integrates a domain-adversarial training module into the SSL framework to learn distinctive features that are robust to both various transformations and stain variations.
Experimental results demonstrate that the proposed method can robustly improve the feature extraction ability of the model.
arXiv Detail & Related papers (2022-08-08T09:54:46Z) - Structure-Preserving Multi-Domain Stain Color Augmentation using
Style-Transfer with Disentangled Representations [0.9051352746190446]
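Domain-adversarial training of the kind SASSL's summary describes is commonly built on a gradient reversal layer: features pass through unchanged in the forward pass, while the domain classifier's gradient is negated on the way back, pushing the encoder toward stain-invariant features. A minimal PyTorch sketch of the reversal layer (an assumption about the module, not the authors' code):

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the incoming gradient
    by -lam in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None  # no gradient w.r.t. lam

def grad_reverse(x: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
    return GradReverse.apply(x, lam)

# Usage: encoder features -> grad_reverse -> stain/domain classifier.
# Minimizing the classifier loss then pushes the encoder to confuse the
# domain classifier, yielding stain-robust features.
```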
- Structure-Preserving Multi-Domain Stain Color Augmentation using Style-Transfer with Disentangled Representations [0.9051352746190446]
HistAuGAN can simulate a wide variety of realistic histology stain colors, thus making neural networks stain-invariant when applied during training.
Based on a generative adversarial network (GAN) for image-to-image translation, our model disentangles the content of the image, i.e., the morphological tissue structure, from the stain color attributes.
It can be trained on multiple domains and, therefore, learns to cover different stain colors as well as other domain-specific variations introduced in the slide preparation and imaging process.
arXiv Detail & Related papers (2021-07-26T17:52:39Z) - Black-Box Diagnosis and Calibration on GAN Intra-Mode Collapse: A Pilot
Study [116.05514467222544]
Generative adversarial networks (GANs) nowadays are capable of producing images of incredible realism.
One concern raised is whether the state-of-the-art GAN's learned distribution still suffers from mode collapse.
This paper explores diagnosing and calibrating GAN intra-mode collapse in a novel black-box setting.
arXiv Detail & Related papers (2021-07-23T06:03:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.