Structure-Preserving Multi-Domain Stain Color Augmentation using
Style-Transfer with Disentangled Representations
- URL: http://arxiv.org/abs/2107.12357v1
- Date: Mon, 26 Jul 2021 17:52:39 GMT
- Title: Structure-Preserving Multi-Domain Stain Color Augmentation using
Style-Transfer with Disentangled Representations
- Authors: Sophia J. Wagner, Nadieh Khalili, Raghav Sharma, Melanie Boxberg,
Carsten Marr, Walter de Back, Tingying Peng
- Score: 0.9051352746190446
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In digital pathology, different staining procedures and scanners cause
substantial color variations in whole-slide images (WSIs), especially across
different laboratories. These color shifts result in a poor generalization of
deep learning-based methods from the training domain to external pathology
data. To increase test performance, stain normalization techniques are used to
reduce the variance between the training and test domains. Alternatively, color
augmentation can be applied during training, leading to a more robust model
without the extra step of color normalization at test time. We propose a novel
color augmentation technique, HistAuGAN, that can simulate a wide variety of
realistic histology stain colors, thus making neural networks stain-invariant
when applied during training. Based on a generative adversarial network (GAN)
for image-to-image translation, our model disentangles the content of the
image, i.e., the morphological tissue structure, from the stain color
attributes. It can be trained on multiple domains and, therefore, learns to
cover different stain colors as well as other domain-specific variations
introduced in the slide preparation and imaging process. We demonstrate that
HistAuGAN outperforms conventional color augmentation techniques on a
classification task on the publicly available dataset Camelyon17 and show that
it is able to mitigate present batch effects.
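The recombination step the abstract describes, keeping the morphological content fixed while swapping in a randomly sampled stain-color attribute, can be caricatured in a few lines of numpy. Everything below is an illustrative stand-in, not HistAuGAN's actual GAN: content is approximated by grayscale structure and the attribute by a random per-channel affine color transform.

```python
import numpy as np

def augment_stain_color(img, rng):
    """Toy sketch of disentangled stain-color augmentation.
    'Content' is approximated by grayscale structure, the 'attribute'
    by a randomly sampled per-channel color transform. HistAuGAN learns
    both with a GAN; this only mimics the recombination step.
    `img` is a float RGB array in [0, 1]."""
    # Content: morphological structure, here a grayscale approximation.
    content = img.mean(axis=-1, keepdims=True)
    # Attribute: a random "stain color" per channel (gain and offset).
    gain = rng.uniform(0.7, 1.3, size=(1, 1, 3))
    offset = rng.uniform(-0.05, 0.05, size=(1, 1, 3))
    # Recombine the fixed content with the sampled color attribute.
    return np.clip(content * gain + offset, 0.0, 1.0)

rng = np.random.default_rng(0)
patch = rng.uniform(0.0, 1.0, size=(32, 32, 3))
augmented = augment_stain_color(patch, rng)
```

Sampling a fresh attribute at every training step is what makes the downstream network see many plausible stain colors for the same tissue structure.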
Related papers
- Stain-Invariant Representation for Tissue Classification in Histology Images [1.1624569521079424]
We propose a framework that generates stain-augmented versions of the training images using a stain perturbation matrix.
We evaluate the performance of the proposed model on cross-domain multi-class tissue type classification of colorectal cancer images.
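For context, a minimal sketch of what stain-matrix perturbation can look like, assuming a fixed Ruifrok-Johnston-style H&E stain matrix; the cited paper's exact perturbation scheme may differ:

```python
import numpy as np

# Rows are stain vectors in optical-density space (Ruifrok-Johnston
# style values; assumed here purely for illustration).
STAIN_MATRIX = np.array([[0.650, 0.704, 0.286],   # hematoxylin
                         [0.072, 0.990, 0.105],   # eosin
                         [0.268, 0.570, 0.776]])  # residual

def perturb_stains(img, rng, sigma=0.05):
    """Jitter the stain vectors and recompose the RGB image.
    `img` is a float RGB array in (0, 1]."""
    od = -np.log(np.clip(img, 1e-6, 1.0))                    # optical density
    conc = od.reshape(-1, 3) @ np.linalg.inv(STAIN_MATRIX)   # concentrations
    # Scale each stain vector by a random factor near 1.
    jittered = STAIN_MATRIX * rng.uniform(1 - sigma, 1 + sigma, size=(3, 1))
    out = np.exp(-(conc @ jittered)).reshape(img.shape)
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(1)
patch = rng.uniform(0.1, 1.0, size=(16, 16, 3))
augmented = perturb_stains(patch, rng)
```

With `sigma=0` the transform is the identity, so the perturbation strength directly controls how far the simulated stains stray from the original.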
arXiv Detail & Related papers (2024-11-21T23:50:30Z)
- Multi-target stain normalization for histology slides [6.820595748010971]
We introduce a novel approach that leverages multiple reference images to enhance robustness against stain variation.
Our method is parameter-free and can be adopted in existing computational pathology pipelines with no significant changes.
arXiv Detail & Related papers (2024-06-04T07:57:34Z)
- Control Color: Multimodal Diffusion-based Interactive Image Colorization [81.68817300796644]
Control Color (Ctrl Color) is a multi-modal colorization method that leverages the pre-trained Stable Diffusion (SD) model.
We present an effective way to encode user strokes to enable precise local color manipulation.
We also introduce a novel module based on self-attention and a content-guided deformable autoencoder to address the long-standing issues of color overflow and inaccurate coloring.
arXiv Detail & Related papers (2024-02-16T17:51:13Z)
- DARC: Distribution-Aware Re-Coloring Model for Generalizable Nucleus Segmentation [68.43628183890007]
We argue that domain gaps can also be caused by different foreground (nucleus)-background ratios.
First, we introduce a re-coloring method that relieves dramatic image color variations between different domains.
Second, we propose a new instance normalization method that is robust to the variation in the foreground-background ratios.
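For reference, plain instance normalization, the baseline whose statistics shift with the foreground-background ratio, can be written in a few lines of numpy (the paper's robust variant is not reproduced here):

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Plain instance normalization. Mean and variance are computed
    per (image, channel) over the spatial dimensions, so they shift
    with the foreground-background ratio -- the sensitivity the
    paper's variant is designed to reduce.
    `x` has shape (batch, height, width, channels)."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.default_rng(2).normal(size=(2, 8, 8, 3))
y = instance_norm(x)
```

Because the statistics are spatial averages, an image that is mostly background pulls the mean toward background intensity, which is exactly the domain-gap source the entry points at.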
arXiv Detail & Related papers (2023-09-01T01:01:13Z)
- ContriMix: Scalable stain color augmentation for domain generalization without domain labels in digital pathology [7.649593612014923]
ContriMix is a domain-label-free stain color augmentation method based on DRIT++, a style-transfer method.
It exploits sample stain color variation within a training minibatch and random mixing to extract content and attribute information from pathology images.
Its performance is consistent across different slides in the test set while being robust to the color variation from rare substances in pathology images.
arXiv Detail & Related papers (2023-06-07T15:36:26Z)
- Stain-invariant self supervised learning for histopathology image analysis [74.98663573628743]
We present a self-supervised algorithm for several classification tasks within hematoxylin and eosin stained images of breast cancer.
Our method achieves the state-of-the-art performance on several publicly available breast cancer datasets.
arXiv Detail & Related papers (2022-11-14T18:16:36Z)
- PalGAN: Image Colorization with Palette Generative Adversarial Networks [51.59276436217957]
We propose a new GAN-based colorization approach PalGAN, integrated with palette estimation and chromatic attention.
PalGAN outperforms the state of the art in quantitative evaluation and visual comparison, delivering notably diverse, contrastive, and edge-preserving appearances.
arXiv Detail & Related papers (2022-10-20T12:28:31Z)
- Neural Color Operators for Sequential Image Retouching [62.99812889713773]
We propose a novel image retouching method by modeling the retouching process as performing a sequence of newly introduced trainable neural color operators.
The neural color operator mimics the behavior of traditional color operators and learns pixelwise color transformation while its strength is controlled by a scalar.
Our method consistently achieves the best results compared with SOTA methods in both quantitative measures and visual quality.
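The core idea, a pixelwise color transform whose effect is governed by a single scalar strength, can be illustrated with an analytic gamma curve standing in for the learned operator (the names and curve below are illustrative, not the paper's):

```python
import numpy as np

def gamma_operator(img, strength):
    """Toy color operator in the spirit of neural color operators:
    a pixelwise transform controlled by one scalar. strength = 0 is
    the identity; positive values brighten, negative values darken.
    The paper learns such operators; this analytic gamma curve is
    only an illustrative stand-in."""
    return np.clip(img, 0.0, 1.0) ** np.exp(-strength)

def retouch(img, strengths):
    # Sequential retouching: apply scalar-controlled operators in order.
    for s in strengths:
        img = gamma_operator(img, s)
    return img

img = np.full((4, 4, 3), 0.25)
out = retouch(img, [0.5, -0.2])
```

Composing several such operators, each with its own learned strength, is what makes the retouching sequence both expressive and easy to control.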
arXiv Detail & Related papers (2022-07-17T05:33:19Z)
- Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z)
- Stain Style Transfer of Histopathology Images Via Structure-Preserved Generative Learning [31.254432319814864]
This study proposes two stain style transfer models, SSIM-GAN and DSCSI-GAN, based on the generative adversarial networks.
By incorporating structure-preservation metrics and feedback from an auxiliary diagnosis network into learning, medically relevant information is preserved in color-normalized images.
arXiv Detail & Related papers (2020-07-24T15:30:19Z)
- Bridging the gap between Natural and Medical Images through Deep Colorization [15.585095421320922]
Transfer learning from natural image collections is a standard practice that attempts to tackle shape, texture and color discrepancies.
In this work, we propose to disentangle those challenges and design a dedicated network module that focuses on color adaptation.
We combine learning from scratch of the color module with transfer learning of different classification backbones, obtaining an end-to-end, easy-to-train architecture for diagnostic image recognition.
arXiv Detail & Related papers (2020-05-21T12:03:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.