Region-guided CycleGANs for Stain Transfer in Whole Slide Images
- URL: http://arxiv.org/abs/2208.12847v1
- Date: Fri, 26 Aug 2022 19:12:49 GMT
- Title: Region-guided CycleGANs for Stain Transfer in Whole Slide Images
- Authors: Joseph Boyd, Irène Villa, Marie-Christine Mathieu, Eric Deutsch,
Nikos Paragios, Maria Vakalopoulou, Stergios Christodoulidis
- Abstract summary: We propose an extension to CycleGANs in the form of a region of interest discriminator.
We present a use case on whole slide images, where an IHC stain provides an experimentally generated signal for metastatic cells.
- Score: 6.704730171977661
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In whole slide imaging, commonly used staining techniques based on
hematoxylin and eosin (H&E) and immunohistochemistry (IHC) stains accentuate
different aspects of the tissue landscape. In the case of detecting metastases,
IHC provides a distinct readout that is readily interpretable by pathologists.
IHC, however, is a more expensive approach and not available at all medical
centers. Virtually generating IHC images from H&E using deep neural networks
thus becomes an attractive alternative. Deep generative models such as
CycleGANs learn a semantically-consistent mapping between two image domains,
while emulating the textural properties of each domain. They are therefore a
suitable choice for stain transfer applications. However, they remain fully
unsupervised, and possess no mechanism for enforcing biological consistency in
stain transfer. In this paper, we propose an extension to CycleGANs in the form
of a region of interest discriminator. This allows the CycleGAN to learn from
unpaired datasets where, in addition, there is a partial annotation of objects
for which one wishes to enforce consistency. We present a use case on whole
slide images, where an IHC stain provides an experimentally generated signal
for metastatic cells. We demonstrate the superiority of our approach over prior
art in stain transfer on histopathology tiles across two datasets. Our code and
model are available at https://github.com/jcboyd/miccai2022-roigan.
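As a rough illustration only (not the authors' implementation, which is in the linked repository), the sketch below shows one way a region-of-interest discriminator could be combined with a standard CycleGAN training step: a PatchGAN-style discriminator scores only the masked, annotated regions, and its loss is added to the usual adversarial and cycle-consistency terms. The module and parameter names (`PatchDiscriminator`, `roi_mask`, `lambda_roi`) are assumptions for illustration.

```python
# Hypothetical sketch (not the authors' code) of adding a region-of-interest
# (ROI) discriminator to a CycleGAN generator update for H&E -> IHC transfer.
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """PatchGAN-style discriminator, as commonly used in CycleGAN."""
    def __init__(self, in_ch=3, base=64):
        super().__init__()
        layers, ch = [], base
        layers += [nn.Conv2d(in_ch, ch, 4, 2, 1), nn.LeakyReLU(0.2, True)]
        for _ in range(2):
            layers += [nn.Conv2d(ch, ch * 2, 4, 2, 1),
                       nn.InstanceNorm2d(ch * 2), nn.LeakyReLU(0.2, True)]
            ch *= 2
        layers += [nn.Conv2d(ch, 1, 4, 1, 1)]  # per-patch real/fake logits
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

def generator_step(G_he2ihc, G_ihc2he, D_ihc, D_roi, he, roi_mask,
                   lambda_cyc=10.0, lambda_roi=1.0):
    """One simplified generator update for the H&E -> IHC direction.

    roi_mask is a binary map of annotated objects (e.g. metastatic cells)
    on which consistency should be enforced; the ROI discriminator only
    ever sees the masked content.
    """
    bce = nn.BCEWithLogitsLoss()
    fake_ihc = G_he2ihc(he)

    # Standard adversarial loss on the whole tile.
    pred_full = D_ihc(fake_ihc)
    loss_adv = bce(pred_full, torch.ones_like(pred_full))

    # ROI adversarial loss: restrict the signal to the annotated regions.
    pred_roi = D_roi(fake_ihc * roi_mask)
    loss_roi = bce(pred_roi, torch.ones_like(pred_roi))

    # Cycle consistency back to H&E, as in the original CycleGAN.
    loss_cyc = nn.functional.l1_loss(G_ihc2he(fake_ihc), he)

    return loss_adv + lambda_roi * loss_roi + lambda_cyc * loss_cyc
```

The masking-based ROI loss above is one plausible design choice; the paper and repository should be consulted for the actual discriminator architecture and weighting.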
Related papers
- StainDiffuser: MultiTask Dual Diffusion Model for Virtual Staining [1.9029890402585894]
Hematoxylin and Eosin (H&E) staining is the most commonly used for disease diagnosis and tumor recurrence tracking.
Deep learning models have made Image-to-Image (I2I) translation a key research area, reducing the need for expensive physical staining processes.
We propose StainDiffuser, a novel dual diffusion architecture for virtual staining that converges under a limited training budget.
arXiv Detail & Related papers (2024-03-17T20:47:52Z) - DARC: Distribution-Aware Re-Coloring Model for Generalizable Nucleus
Segmentation [68.43628183890007]
We argue that domain gaps can also be caused by different foreground (nucleus)-background ratios.
First, we introduce a re-coloring method that relieves dramatic image color variations between different domains.
Second, we propose a new instance normalization method that is robust to the variation in the foreground-background ratios.
arXiv Detail & Related papers (2023-09-01T01:01:13Z) - Structural Cycle GAN for Virtual Immunohistochemistry Staining of Gland
Markers in the Colon [1.741980945827445]
Hematoxylin and Eosin (H&E) staining is one of the most frequently used stains for disease analysis, diagnosis, and grading.
Pathologists, however, need different chemical (IHC) stains to analyze specific structures or cells.
arXiv Detail & Related papers (2023-08-25T05:24:23Z) - Cross-modulated Few-shot Image Generation for Colorectal Tissue
Classification [58.147396879490124]
Our few-shot generation method, named XM-GAN, takes one base and a pair of reference tissue images as input and generates high-quality yet diverse images.
To the best of our knowledge, we are the first to investigate few-shot generation in colorectal tissue images.
arXiv Detail & Related papers (2023-04-04T17:50:30Z) - Stain-invariant self supervised learning for histopathology image
analysis [74.98663573628743]
We present a self-supervised algorithm for several classification tasks within hematoxylin and eosin stained images of breast cancer.
Our method achieves the state-of-the-art performance on several publicly available breast cancer datasets.
arXiv Detail & Related papers (2022-11-14T18:16:36Z) - HistoStarGAN: A Unified Approach to Stain Normalisation, Stain Transfer
and Stain Invariant Segmentation in Renal Histopathology [0.5505634045241288]
HistoStarGAN is a unified framework that performs stain transfer between multiple stainings.
It can serve as a synthetic data generator, which paves the way for the use of fully annotated synthetic image data.
arXiv Detail & Related papers (2022-10-18T12:22:26Z) - Virtual stain transfer in histology via cascaded deep neural networks [2.309018557701645]
We demonstrate a virtual stain transfer framework via a cascaded deep neural network (C-DNN)
Unlike a single neural network structure which only takes one stain type as input to digitally output images of another stain type, C-DNN first uses virtual staining to transform autofluorescence microscopy images into H&E.
We successfully transferred the H&E-stained tissue images into virtual PAS (periodic acid-Schiff) stain.
arXiv Detail & Related papers (2022-07-14T00:43:18Z) - Stain based contrastive co-training for histopathological image analysis [61.87751502143719]
We propose a novel semi-supervised learning approach for classification of histopathology images.
We employ strong supervision with patch-level annotations combined with a novel co-training loss to create a semi-supervised learning framework.
We evaluate our approach in clear cell renal cell and prostate carcinomas, and demonstrate improvement over state-of-the-art semi-supervised learning methods.
arXiv Detail & Related papers (2022-06-24T22:25:31Z) - Cross-Modal Contrastive Learning for Abnormality Classification and
Localization in Chest X-rays with Radiomics using a Feedback Loop [63.81818077092879]
We propose an end-to-end semi-supervised cross-modal contrastive learning framework for medical images.
We first apply an image encoder to classify the chest X-rays and to generate the image features.
The radiomic features are then passed through another dedicated encoder to act as the positive sample for the image features generated from the same chest X-ray.
arXiv Detail & Related papers (2021-04-11T09:16:29Z) - What Can Be Transferred: Unsupervised Domain Adaptation for Endoscopic
Lesions Segmentation [51.7837386041158]
We develop a new unsupervised semantic transfer model including two complementary modules for endoscopic lesions segmentation.
Specifically, T_D focuses on where to translate transferable visual information of medical lesions via a residual transferability-aware bottleneck.
T_F highlights how to augment transferable semantic features of various lesions and automatically ignore untransferable representations.
arXiv Detail & Related papers (2020-04-24T00:57:05Z) - Pix2Pix-based Stain-to-Stain Translation: A Solution for Robust Stain
Normalization in Histopathology Images Analysis [5.33024001730262]
Stain-to-Stain Translation (STST) is used for stain normalization of Hematoxylin and Eosin stained histopathology images.
We perform the translation based on the pix2pix framework, which uses conditional generative adversarial networks (cGANs); a minimal sketch of this objective follows the list below.
arXiv Detail & Related papers (2020-02-03T11:19:01Z)
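For the pix2pix-based stain-to-stain translation entry above, the following is a minimal, hypothetical sketch (not taken from that paper) of the conditional GAN objective: the discriminator is conditioned on the source stain by channel concatenation, and an L1 term ties the translation to the paired target stain. The helper name `pix2pix_losses` and the weight `lambda_l1` are illustrative assumptions.

```python
# Hypothetical sketch of a pix2pix-style objective for paired stain translation.
import torch
import torch.nn as nn

def pix2pix_losses(G, D, src, tgt, lambda_l1=100.0):
    """Return (generator_loss, discriminator_loss) for one paired batch.

    src: source-stain image batch; tgt: corresponding target-stain batch.
    D must accept the channel-wise concatenation of source and target images.
    """
    bce = nn.BCEWithLogitsLoss()
    fake = G(src)

    # Discriminator: distinguish (src, tgt) from (src, G(src)).
    d_real = D(torch.cat([src, tgt], dim=1))
    d_fake = D(torch.cat([src, fake.detach()], dim=1))
    loss_d = 0.5 * (bce(d_real, torch.ones_like(d_real)) +
                    bce(d_fake, torch.zeros_like(d_fake)))

    # Generator: fool the conditional discriminator and stay close in L1.
    d_fake_for_g = D(torch.cat([src, fake], dim=1))
    loss_g = (bce(d_fake_for_g, torch.ones_like(d_fake_for_g)) +
              lambda_l1 * nn.functional.l1_loss(fake, tgt))
    return loss_g, loss_d
```

Note that, unlike the unpaired CycleGAN setting of the main paper, this objective assumes pixel-aligned source and target stains.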
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.