ContriMix: Scalable stain color augmentation for domain generalization
without domain labels in digital pathology
- URL: http://arxiv.org/abs/2306.04527v4
- Date: Fri, 8 Mar 2024 17:28:47 GMT
- Title: ContriMix: Scalable stain color augmentation for domain generalization
without domain labels in digital pathology
- Authors: Tan H. Nguyen, Dinkar Juyal, Jin Li, Aaditya Prakash, Shima Nofallah,
Chintan Shah, Sai Chowdary Gullapally, Limin Yu, Michael Griffin, Anand
Sampat, John Abel, Justin Lee, Amaro Taylor-Weiner
- Abstract summary: ContriMix is a domain-label-free stain color augmentation method based on DRIT++, a style-transfer method.
It exploits sample stain color variation within a training minibatch and random mixing to extract content and attribute information from pathology images.
Its performance is consistent across different slides in the test set while being robust to the color variation from rare substances in pathology images.
- Score: 7.649593612014923
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Differences in staining and imaging procedures can cause significant color
variations in histopathology images, leading to poor generalization when
deploying deep-learning models trained on a different data source. Various
color augmentation methods have been proposed to generate synthetic images
during training to make models more robust, eliminating the need for stain
normalization during test time. Many color augmentation methods leverage domain
labels to generate synthetic images. This approach poses three significant
challenges to scaling such methods. Firstly, incorporating data from a new
domain into deep-learning models trained on existing domain labels is not
straightforward. Secondly, dependency on domain labels prevents the use of
pathology images without domain labels to improve model performance. Finally,
implementation of these methods becomes complicated when multiple domain labels
(e.g., patient identification, medical center) are associated with a
single image. We introduce ContriMix, a novel domain-label-free stain color
augmentation method based on DRIT++, a style-transfer method. ContriMix
leverages sample stain color variation within a training minibatch and random
mixing to extract content and attribute information from pathology images. This
information can be used by a trained ContriMix model to create synthetic images
to improve the performance of existing classifiers. ContriMix outperforms
competing methods on the Camelyon17-WILDS dataset. Its performance is
consistent across different slides in the test set while being robust to the
color variation from rare substances in pathology images. We make our code and
trained ContriMix models available for research use. The code for ContriMix can
be found at https://gitlab.com/huutan86/contrimix
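To make the mixing step concrete, here is a minimal PyTorch sketch of minibatch content/attribute cross-mixing in the spirit of ContriMix. The module names, architectures, and dimensions (ContentEncoder, AttributeEncoder, Decoder, content_dim, attr_dim) are illustrative assumptions rather than the authors' implementation; the actual code is at the GitLab link above.

```python
# Minimal sketch of minibatch content/attribute cross-mixing (illustrative only).
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Maps an image to a content tensor meant to capture tissue morphology."""
    def __init__(self, channels=3, content_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, content_dim, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class AttributeEncoder(nn.Module):
    """Maps an image to a low-dimensional attribute vector (stain color)."""
    def __init__(self, channels=3, attr_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, attr_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs an image from a content tensor modulated by an attribute."""
    def __init__(self, content_dim=16, attr_dim=8, channels=3):
        super().__init__()
        self.film = nn.Linear(attr_dim, 2 * content_dim)  # per-channel scale/shift
        self.out = nn.Conv2d(content_dim, channels, 3, padding=1)

    def forward(self, content, attr):
        scale, shift = self.film(attr).chunk(2, dim=1)
        h = content * scale[:, :, None, None] + shift[:, :, None, None]
        return torch.sigmoid(self.out(h))

def cross_mix(x, enc_c, enc_a, dec):
    """Content of image i + attribute of a randomly paired image j."""
    content, attr = enc_c(x), enc_a(x)
    perm = torch.randperm(x.size(0))  # random pairing within the minibatch
    return dec(content, attr[perm])   # synthetic images with swapped stain

x = torch.rand(4, 3, 64, 64)  # toy minibatch of image patches
x_aug = cross_mix(x, ContentEncoder(), AttributeEncoder(), Decoder())
print(x_aug.shape)  # torch.Size([4, 3, 64, 64])
```

In the paper, the encoders and decoder are trained so that the content captures morphology and the attribute captures stain color; once trained, the mixed outputs serve as synthetic training images for a downstream classifier.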
Related papers
- Automatic Controllable Colorization via Imagination [55.489416987587305]
We propose a framework for automatic colorization that allows for iterative editing and modifications.
By understanding the content within a grayscale image, we utilize a pre-trained image generation model to generate multiple images that contain the same content.
These images serve as references for coloring, mimicking the process of human experts.
arXiv Detail & Related papers (2024-04-08T16:46:07Z)
- DiffuseMix: Label-Preserving Data Augmentation with Diffusion Models [18.44432223381586]
Recently, a number of image-mixing-based augmentation techniques have been introduced to improve the generalization of deep neural networks.
In these techniques, two or more randomly selected natural images are mixed together to generate an augmented image (a minimal sketch of this mixing idea appears after this list).
We propose DiffuseMix, a novel data augmentation technique that leverages a diffusion model to reshape training images.
arXiv Detail & Related papers (2024-04-05T05:31:02Z)
- Control Color: Multimodal Diffusion-based Interactive Image Colorization [81.68817300796644]
Control Color (Ctrl Color) is a multi-modal colorization method that leverages the pre-trained Stable Diffusion (SD) model.
We present an effective way to encode user strokes to enable precise local color manipulation.
We also introduce a novel module based on self-attention and a content-guided deformable autoencoder to address the long-standing issues of color overflow and inaccurate coloring.
arXiv Detail & Related papers (2024-02-16T17:51:13Z)
- SynCDR : Training Cross Domain Retrieval Models with Synthetic Data [69.26882668598587]
In cross-domain retrieval, a model is required to identify images from the same semantic category across two visual domains.
We show how to generate synthetic data to fill in these missing category examples across domains.
Our best SynCDR model can outperform prior art by up to 15%.
arXiv Detail & Related papers (2023-12-31T08:06:53Z)
- DARC: Distribution-Aware Re-Coloring Model for Generalizable Nucleus Segmentation [68.43628183890007]
We argue that domain gaps can also be caused by different foreground (nucleus)-background ratios.
First, we introduce a re-coloring method that relieves dramatic image color variations between different domains.
Second, we propose a new instance normalization method that is robust to the variation in the foreground-background ratios.
arXiv Detail & Related papers (2023-09-01T01:01:13Z)
- Detecting Recolored Image by Spatial Correlation [60.08643417333974]
Image recoloring is an emerging editing technique that can manipulate the color values of an image to give it a new style.
In this paper, we explore a solution based on spatial correlation, which offers generic detection capability for both conventional and deep-learning-based recoloring.
Our method achieves state-of-the-art detection accuracy on multiple benchmark datasets and generalizes well to unknown types of recoloring methods.
arXiv Detail & Related papers (2022-04-23T01:54:06Z)
- Structure-Preserving Multi-Domain Stain Color Augmentation using Style-Transfer with Disentangled Representations [0.9051352746190446]
HistAuGAN can simulate a wide variety of realistic histology stain colors, thus making neural networks stain-invariant when applied during training.
Based on a generative adversarial network (GAN) for image-to-image translation, our model disentangles the content of the image, i.e., the morphological tissue structure, from the stain color attributes.
It can be trained on multiple domains and, therefore, learns to cover different stain colors as well as other domain-specific variations introduced in the slide preparation and imaging process.
arXiv Detail & Related papers (2021-07-26T17:52:39Z)
- Instance-aware Image Colorization [51.12040118366072]
In this paper, we propose a method for achieving instance-aware colorization.
Our network architecture leverages an off-the-shelf object detector to obtain cropped object images.
We use a similar network to extract the full-image features and apply a fusion module to predict the final colors.
arXiv Detail & Related papers (2020-05-21T17:59:23Z)
- Bridging the gap between Natural and Medical Images through Deep Colorization [15.585095421320922]
Transfer learning from natural image collections is a standard practice that attempts to tackle shape, texture and color discrepancies.
In this work, we propose to disentangle those challenges and design a dedicated network module that focuses on color adaptation.
We combine learning from scratch of the color module with transfer learning of different classification backbones, obtaining an end-to-end, easy-to-train architecture for diagnostic image recognition.
arXiv Detail & Related papers (2020-05-21T12:03:14Z)
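As a companion to the image-mixing idea in the DiffuseMix entry above, here is a minimal sketch of the generic mixing recipe such augmentation techniques build on: a pixelwise convex combination of two randomly paired training images, with labels blended by the same weight. This shows only the baseline idea; DiffuseMix itself reshapes images with a diffusion model rather than simple blending, and the function below is purely illustrative.

```python
# Minimal sketch of pixelwise image mixing with label-preserving soft labels.
import torch

def mix_batch(x, y, alpha=0.4):
    """Blend each image (and its one-hot label) with a random partner."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))            # random pairing within the batch
    x_mixed = lam * x + (1.0 - lam) * x[perm]   # convex combination of pixels
    y_mixed = lam * y + (1.0 - lam) * y[perm]   # labels blended by the same weight
    return x_mixed, y_mixed

x = torch.rand(8, 3, 32, 32)                    # toy batch of images
y = torch.eye(10)[torch.randint(0, 10, (8,))]   # one-hot labels for 10 classes
x_mixed, y_mixed = mix_batch(x, y)
```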