IMPaSh: A Novel Domain-shift Resistant Representation for Colorectal
Cancer Tissue Classification
- URL: http://arxiv.org/abs/2208.11052v1
- Date: Tue, 23 Aug 2022 15:59:08 GMT
- Authors: Trinh Thi Le Vuong, Quoc Dang Vu, Mostafa Jahanifar, Simon Graham, Jin
Tae Kwak, Nasir Rajpoot
- Abstract summary: We propose a new augmentation called PatchShuffling and a novel self-supervised contrastive learning framework named IMPaSh for pre-training deep learning models.
We show that the proposed method outperforms other traditional histology domain-adaptation and state-of-the-art self-supervised learning methods.
- Score: 5.017246733091823
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The appearance of histopathology images depends on tissue type, staining
and digitization procedure. These vary from source to source and are potential
causes of domain-shift problems. Owing to this, despite the great
success of deep learning models in computational pathology, a model trained on
a specific domain may still perform sub-optimally when applied to another
domain. To overcome this, we propose a new augmentation called PatchShuffling
and a novel self-supervised contrastive learning framework named IMPaSh for
pre-training deep learning models. Using these, we obtained a ResNet50 encoder
that can extract image representation resistant to domain-shift. We compared
our derived representation against those acquired based on other
domain-generalization techniques by using them for the cross-domain
classification of colorectal tissue images. We show that the proposed method
outperforms other traditional histology domain-adaptation and state-of-the-art
self-supervised learning methods. Code is available at:
https://github.com/trinhvg/IMPash .
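The abstract describes PatchShuffling only at a high level; a minimal sketch of what a patch-shuffling augmentation could look like is given below. The function name, grid size, and implementation details are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np

def patch_shuffle(img: np.ndarray, grid: int = 4, seed=None) -> np.ndarray:
    """Split an H x W (x C) image into a grid x grid set of patches
    and return a copy with those patches randomly permuted."""
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    ph, pw = h // grid, w // grid
    # Crop so the image divides evenly into the grid.
    img = img[: ph * grid, : pw * grid]
    patches = [
        img[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw]
        for r in range(grid)
        for c in range(grid)
    ]
    order = rng.permutation(len(patches))
    out = np.empty_like(img)
    for idx, k in enumerate(order):
        r, c = divmod(idx, grid)
        out[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw] = patches[k]
    return out
```

Such an augmentation destroys global layout while preserving local texture, which is plausibly why contrastive pre-training on shuffled views encourages representations less tied to source-specific appearance.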
Related papers
- Stain-Invariant Representation for Tissue Classification in Histology Images [1.1624569521079424]
We propose a framework that generates stain-augmented versions of the training images using a stain perturbation matrix.
We evaluate the performance of the proposed model on cross-domain multi-class tissue type classification of colorectal cancer images.
arXiv Detail & Related papers (2024-11-21T23:50:30Z)
- Unsupervised Domain Transfer with Conditional Invertible Neural Networks [83.90291882730925]
We propose a domain transfer approach based on conditional invertible neural networks (cINNs).
Our method inherently guarantees cycle consistency through its invertible architecture, and network training can efficiently be conducted with maximum likelihood.
Our method enables the generation of realistic spectral data and outperforms the state of the art on two downstream classification tasks.
arXiv Detail & Related papers (2023-03-17T18:00:27Z)
- A domain adaptive deep learning solution for scanpath prediction of paintings [66.46953851227454]
This paper focuses on the eye-movement analysis of viewers during the visual experience of a certain number of paintings.
We introduce a new approach to predicting human visual attention, which underpins several human cognitive functions.
The proposed new architecture ingests images and returns scanpaths, a sequence of points featuring a high likelihood of catching viewers' attention.
arXiv Detail & Related papers (2022-09-22T22:27:08Z)
- Unsupervised Domain Adaptation Using Feature Disentanglement And GCNs For Medical Image Classification [5.6512908295414]
We propose an unsupervised domain adaptation approach that uses graph neural networks and disentangled semantic and domain-invariant structural features.
We test the proposed method for classification on two challenging medical image datasets with distribution shifts.
Experiments show our method achieves state-of-the-art results compared to other domain adaptation methods.
arXiv Detail & Related papers (2022-06-27T09:02:16Z)
- From Modern CNNs to Vision Transformers: Assessing the Performance, Robustness, and Classification Strategies of Deep Learning Models in Histopathology [1.8947504307591034]
We develop a new methodology to extensively evaluate a wide range of classification models.
We thoroughly tested the models on five widely used histopathology datasets.
We extend existing interpretability methods and systematically reveal insights into the models' classification strategies.
arXiv Detail & Related papers (2022-04-11T12:26:19Z)
- Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains [80.11169390071869]
Adversarial examples have posed a severe threat to deep neural networks due to their transferable nature.
We propose a Beyond ImageNet Attack (BIA) to investigate the transferability towards black-box domains.
Our methods outperform state-of-the-art approaches by up to 7.71% (towards coarse-grained domains) and 25.91% (towards fine-grained domains) on average.
arXiv Detail & Related papers (2022-01-27T14:04:27Z)
- Self-Supervised Domain Adaptation for Diabetic Retinopathy Grading using Vessel Image Reconstruction [61.58601145792065]
We learn invariant target-domain features by defining a novel self-supervised task based on retinal vessel image reconstructions.
We show that our approach outperforms existing domain adaptation strategies.
arXiv Detail & Related papers (2021-07-20T09:44:07Z)
- Learning domain-agnostic visual representation for computational pathology using medically-irrelevant style transfer augmentation [4.538771844947821]
STRAP (Style TRansfer Augmentation for histoPathology) is a form of data augmentation based on random style transfer from artistic paintings.
Style transfer replaces the low-level texture content of images with the uninformative style of randomly selected artistic paintings.
We demonstrate that STRAP leads to state-of-the-art performance, particularly in the presence of domain shifts.
arXiv Detail & Related papers (2021-02-02T18:50:16Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- Continual Adaptation of Visual Representations via Domain Randomization and Meta-learning [21.50683576864347]
Most standard learning approaches lead to fragile models which are prone to drift when sequentially trained on samples of a different nature.
We show that one way to learn models that are inherently more robust against forgetting is domain randomization.
We devise a meta-learning strategy where a regularizer explicitly penalizes any loss associated with transferring the model from the current domain to different "auxiliary" meta-domains.
arXiv Detail & Related papers (2020-12-08T09:54:51Z)
- CrDoCo: Pixel-level Domain Transfer with Cross-Domain Consistency [119.45667331836583]
Unsupervised domain adaptation algorithms aim to transfer the knowledge learned from one domain to another.
We present a novel pixel-wise adversarial domain adaptation algorithm.
arXiv Detail & Related papers (2020-01-09T19:00:35Z)