Stain Isolation-based Guidance for Improved Stain Translation
- URL: http://arxiv.org/abs/2207.00431v1
- Date: Tue, 28 Jun 2022 13:33:48 GMT
- Title: Stain Isolation-based Guidance for Improved Stain Translation
- Authors: Nicolas Brieu, Felix J. Segerer, Ansh Kapil, Philipp Wortmann, Guenter Schmidt
- Abstract summary: CycleGAN is the state of the art for the stain translation of histopathology images.
It often suffers from the presence of cycle-consistent but non-structure-preserving errors.
We propose an alternative to the family of methods that rely on segmentation consistency to preserve pathology structures.
- Score: 0.5249805590164902
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Unsupervised and unpaired domain translation using generative
adversarial neural networks, and more precisely CycleGAN, is the state of the
art for the stain translation of histopathology images. It often, however,
suffers from the presence of cycle-consistent but non-structure-preserving
errors. We propose an alternative to the family of methods that rely on
segmentation consistency to preserve pathology structures. Focusing on
immunohistochemistry (IHC) and multiplexed immunofluorescence (mIF), we
introduce a simple yet effective guidance scheme as a loss function that
leverages the consistency of stain translation with stain isolation.
Qualitative and quantitative experiments show the ability of the proposed
approach to improve translation between the two domains.
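The abstract describes the guidance scheme only at a high level. A minimal sketch of how such a stain-isolation consistency term could be attached to a CycleGAN generator loss is given below; the use of Ruifrok-Johnston color deconvolution (scikit-image's `rgb2hed`) as the stain-isolation step, the mapping of mIF channel 0 to a nuclear marker, and the weight `lambda_stain` are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: a stain-isolation consistency loss for IHC -> mIF translation.
# Assumptions (not from the paper): stain isolation = Ruifrok-Johnston color
# deconvolution (skimage.color.rgb2hed), and channel 0 of the generated mIF
# image corresponds to a nuclear marker.
import numpy as np
import torch
import torch.nn.functional as F
from skimage.color import rgb2hed


def hematoxylin_channel(rgb_batch: torch.Tensor) -> torch.Tensor:
    """Isolate the hematoxylin channel of an RGB batch (N, 3, H, W) in [0, 1]."""
    imgs = rgb_batch.detach().cpu().permute(0, 2, 3, 1).numpy()   # (N, H, W, 3)
    hed = np.stack([rgb2hed(img)[..., 0] for img in imgs])        # H channel only
    return torch.as_tensor(hed, dtype=rgb_batch.dtype, device=rgb_batch.device)


def stain_isolation_loss(real_ihc: torch.Tensor, fake_mif: torch.Tensor) -> torch.Tensor:
    """L1 disagreement between the isolated stain of the source IHC image and
    the (assumed) nuclear channel of the translated mIF image.
    Note: HED values are optical densities; in practice both signals may need
    normalization to a comparable range."""
    target = hematoxylin_channel(real_ihc)   # fixed guidance signal (no gradient)
    nuclear = fake_mif[:, 0, :, :]           # gradient flows back into the generator
    return F.l1_loss(nuclear, target)


# Usage within a CycleGAN training step (lambda_stain is a hypothetical weight):
# total_g_loss = gan_loss + cycle_loss + lambda_stain * stain_isolation_loss(real_ihc, G(real_ihc))
```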
Related papers
- Cascaded Multi-path Shortcut Diffusion Model for Medical Image Translation [26.67518950976257]
We propose a Cascade Multi-path Shortcut Diffusion Model (CMDM) for high-quality medical image translation and uncertainty estimation.
Experimental results show that CMDM can produce high-quality translations comparable to state-of-the-art methods.
arXiv Detail & Related papers (2024-04-06T03:02:47Z)
- Auxiliary CycleGAN-guidance for Task-Aware Domain Translation from Duplex to Monoplex IHC Images [0.3769303106863454]
Cycle Generative Adversarial Networks (GANs) are well established, but the associated cycle-consistency constraint relies on the assumption that an invertible mapping exists between the two domains.
Through the introduction of a novel training design, we propose an alternative constraint that leverages a set of immunofluorescence (IF) images as an auxiliary unpaired image domain.
arXiv Detail & Related papers (2024-03-12T07:57:33Z)
- A Strictly Bounded Deep Network for Unpaired Cyclic Translation of Medical Images [0.5120567378386615]
We consider unpaired medical images and provide a strictly bounded network that yields a stable bidirectional translation.
We propose a patch-level cyclic conditional adversarial network (pCCGAN) embedded with adaptive dictionary learning.
arXiv Detail & Related papers (2023-11-04T18:43:31Z)
- TransFool: An Adversarial Attack against Neural Machine Translation Models [49.50163349643615]
We investigate the vulnerability of Neural Machine Translation (NMT) models to adversarial attacks and propose a new attack algorithm called TransFool.
We generate fluent adversarial examples in the source language that maintain a high level of semantic similarity with the clean samples.
Based on automatic and human evaluations, TransFool leads to improvement in terms of success rate, semantic similarity, and fluency compared to the existing attacks.
arXiv Detail & Related papers (2023-02-02T08:35:34Z)
- Stain based contrastive co-training for histopathological image analysis [61.87751502143719]
We propose a novel semi-supervised learning approach for classification of histopathology images.
We employ strong supervision with patch-level annotations combined with a novel co-training loss to create a semi-supervised learning framework.
We evaluate our approach in clear cell renal cell and prostate carcinomas, and demonstrate improvement over state-of-the-art semi-supervised learning methods.
arXiv Detail & Related papers (2022-06-24T22:25:31Z)
- Improving Unsupervised Stain-To-Stain Translation using Self-Supervision and Meta-Learning [4.32671477389424]
Unsupervised domain adaptation based on image-to-image translation is gaining importance in digital pathology.
We tackle the variation of different histological stains by unsupervised stain-to-stain translation.
We use CycleGANs for stain-to-stain translation in kidney histopathology.
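For context on the CycleGAN objective referenced here, below is a minimal sketch of the standard cycle-consistency term used for unpaired stain-to-stain translation; the generator names `G` (stain A to B) and `F` (stain B to A) and the default weight `lambda_cyc = 10` follow the original CycleGAN formulation rather than the cited paper's code.

```python
# Minimal sketch of the standard CycleGAN cycle-consistency term for unpaired
# stain-to-stain translation. G maps stain A -> B, F maps stain B -> A.
import torch
import torch.nn.functional as nnf


def cycle_consistency_loss(G, F, real_a: torch.Tensor, real_b: torch.Tensor,
                           lambda_cyc: float = 10.0) -> torch.Tensor:
    """L_cyc = lambda_cyc * (||F(G(a)) - a||_1 + ||G(F(b)) - b||_1)."""
    forward = nnf.l1_loss(F(G(real_a)), real_a)    # A -> B -> A reconstruction
    backward = nnf.l1_loss(G(F(real_b)), real_b)   # B -> A -> B reconstruction
    return lambda_cyc * (forward + backward)
```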
arXiv Detail & Related papers (2021-12-16T12:42:40Z)
- Modelling Latent Translations for Cross-Lingual Transfer [47.61502999819699]
We propose a new technique that integrates both steps of the traditional pipeline (translation and classification) into a single model.
We evaluate our novel latent translation-based model on a series of multilingual NLU tasks.
We report gains for both zero-shot and few-shot learning setups, up to 2.7 accuracy points on average.
arXiv Detail & Related papers (2021-07-23T17:11:27Z)
- Self adversarial attack as an augmentation method for immunohistochemical stainings [0.7340845393655052]
We demonstrate that, when applied to histopathology data, this hidden noise appears to be related to stain-specific features.
By perturbing this hidden information, the translation models produce different, plausible outputs.
arXiv Detail & Related papers (2021-03-21T10:48:40Z)
- Weakly-Supervised Cross-Domain Adaptation for Endoscopic Lesions Segmentation [79.58311369297635]
We propose a new weakly-supervised lesions transfer framework, which can explore transferable domain-invariant knowledge across different datasets.
A Wasserstein quantified transferability framework is developed to highlight wide-range transferable contextual dependencies.
A novel self-supervised pseudo label generator is designed to equally provide confident pseudo pixel labels for both hard-to-transfer and easy-to-transfer target samples.
arXiv Detail & Related papers (2020-12-08T02:26:03Z)
- On Long-Tailed Phenomena in Neural Machine Translation [50.65273145888896]
State-of-the-art Neural Machine Translation (NMT) models struggle with generating low-frequency tokens.
We propose a new loss function, the Anti-Focal loss, to better adapt model training to the structural dependencies of conditional text generation.
We show the efficacy of the proposed technique on a number of Machine Translation (MT) datasets, demonstrating that it leads to significant gains over cross-entropy.
arXiv Detail & Related papers (2020-10-10T07:00:57Z)
- Discrete Variational Attention Models for Language Generation [51.88612022940496]
We propose a discrete variational attention model with a categorical distribution over the attention mechanism, owing to the discrete nature of language.
Thanks to the property of discreteness, the training of our proposed approach does not suffer from posterior collapse.
arXiv Detail & Related papers (2020-04-21T05:49:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.