Learning domain-agnostic visual representation for computational
pathology using medically-irrelevant style transfer augmentation
- URL: http://arxiv.org/abs/2102.01678v1
- Date: Tue, 2 Feb 2021 18:50:16 GMT
- Title: Learning domain-agnostic visual representation for computational
pathology using medically-irrelevant style transfer augmentation
- Authors: Rikiya Yamashita, Jin Long, Snikitha Banda, Jeanne Shen, Daniel L.
Rubin
- Abstract summary: STRAP (Style TRansfer Augmentation for histoPathology) is a form of data augmentation based on random style transfer from artistic paintings.
Style transfer replaces the low-level texture content of images with the uninformative style of randomly selected artistic paintings.
We demonstrate that STRAP leads to state-of-the-art performance, particularly in the presence of domain shifts.
- Score: 4.538771844947821
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Suboptimal generalization of machine learning models on unseen data is a key
challenge which hampers the clinical applicability of such models to medical
imaging. Although various methods such as domain adaptation and domain
generalization have evolved to combat this challenge, learning robust and
generalizable representations is core to medical image understanding, and
continues to be a problem. Here, we propose STRAP (Style TRansfer Augmentation
for histoPathology), a form of data augmentation based on random style transfer
from artistic paintings, for learning domain-agnostic visual representations in
computational pathology. Style transfer replaces the low-level texture content
of images with the uninformative style of randomly selected artistic paintings,
while preserving high-level semantic content. This improves robustness to
domain shift and can be used as a simple yet powerful tool for learning
domain-agnostic representations. We demonstrate that STRAP leads to
state-of-the-art performance, particularly in the presence of domain shifts, on
a particular classification task of predicting microsatellite status in
colorectal cancer using digitized histopathology images.
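To make the augmentation concrete, here is a minimal sketch. It is not the authors' implementation: the color_stat_transfer stand-in only matches per-channel color statistics, whereas STRAP uses a neural style-transfer network, and the paintings/ directory is an assumed local collection of artwork images.

```python
import random
from pathlib import Path

import numpy as np
from PIL import Image

# Assumed local folder of artistic paintings (e.g., from a public art dataset).
PAINTINGS = list(Path("paintings").glob("*.jpg"))

def color_stat_transfer(content: Image.Image, style: Image.Image,
                        eps: float = 1e-5) -> Image.Image:
    """Crude stand-in for neural style transfer: match the content image's
    per-channel mean/std to those of the style image."""
    c = np.asarray(content, dtype=np.float32)
    s = np.asarray(style, dtype=np.float32)
    out = (c - c.mean((0, 1))) / (c.std((0, 1)) + eps) * s.std((0, 1)) + s.mean((0, 1))
    return Image.fromarray(np.clip(out, 0, 255).astype(np.uint8))

def strap_augment(patch: Image.Image, p: float = 0.5) -> Image.Image:
    """With probability p, replace the low-level appearance of a
    histopathology patch with that of a randomly chosen painting."""
    if PAINTINGS and random.random() < p:
        style = Image.open(random.choice(PAINTINGS)).convert("RGB").resize(patch.size)
        return color_stat_transfer(patch, style)
    return patch
```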
Related papers
- MLIP: Enhancing Medical Visual Representation with Divergence Encoder
and Knowledge-guided Contrastive Learning [48.97640824497327]
We propose a novel framework leveraging domain-specific medical knowledge as guiding signals to integrate language information into the visual domain through image-text contrastive learning.
Our model includes global contrastive learning with our designed divergence encoder, local token-knowledge-patch alignment contrastive learning, and knowledge-guided category-level contrastive learning with expert knowledge.
Notably, MLIP surpasses state-of-the-art methods even with limited annotated data, highlighting the potential of multimodal pre-training in advancing medical representation learning.
arXiv Detail & Related papers (2024-02-03T05:48:50Z)
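As a rough illustration of the global image-text contrastive component in the entry above (not MLIP's actual code, which adds the divergence encoder and knowledge-guided terms on top), a symmetric InfoNCE loss can be sketched as:

```python
import torch
import torch.nn.functional as F

def image_text_contrastive_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor,
                                temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over a batch of matched image/report embedding pairs."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature          # cosine-similarity logits
    targets = torch.arange(img.size(0), device=img.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```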
- Learned representation-guided diffusion models for large-image generation [58.192263311786824]
We introduce a novel approach that trains diffusion models conditioned on embeddings from self-supervised learning (SSL).
Our diffusion models successfully project these features back to high-quality histopathology and remote sensing images.
Augmenting real data by generating variations of real images improves downstream accuracy for patch-level and larger, image-scale classification tasks.
arXiv Detail & Related papers (2023-12-12T14:45:45Z)
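A minimal sketch of the conditioning idea in the entry above (generic DDPM-style training, not the paper's code; ssl_encoder is assumed to be a frozen self-supervised model):

```python
import torch
import torch.nn.functional as F

def conditioned_diffusion_step(model, ssl_encoder, x0, alphas_cumprod):
    """One denoising-diffusion training step conditioned on an SSL embedding
    of the clean image (how `model` consumes the condition, e.g. via
    cross-attention, is left abstract here)."""
    b = x0.size(0)
    t = torch.randint(0, alphas_cumprod.numel(), (b,), device=x0.device)
    a_bar = alphas_cumprod[t].view(b, 1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise  # forward process
    with torch.no_grad():
        cond = ssl_encoder(x0)                            # frozen SSL embedding
    return F.mse_loss(model(x_t, t, cond), noise)         # predict the noise
```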
- Histopathological Image Analysis with Style-Augmented Feature Domain Mixing for Improved Generalization [14.797708873795406]
Domain generalization aims to address these limitations by enabling models to generalize to new datasets or populations.
Style transfer-based data augmentation is an emerging technique that can be used to improve the generalizability of machine learning models.
We propose a feature domain style mixing technique that uses adaptive instance normalization to generate style-augmented versions of images.
arXiv Detail & Related papers (2023-10-31T17:06:36Z)
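The adaptive instance normalization at the core of the technique above is compact enough to sketch directly (a generic version, with in-batch style mixing shown as one plausible use):

```python
import torch

def adain(content: torch.Tensor, style: torch.Tensor,
          eps: float = 1e-5) -> torch.Tensor:
    """Adaptive instance normalization: re-normalize content feature maps
    (N, C, H, W) to carry the style features' channel-wise mean and std."""
    c_mu = content.mean(dim=(2, 3), keepdim=True)
    c_sigma = content.std(dim=(2, 3), keepdim=True) + eps
    s_mu = style.mean(dim=(2, 3), keepdim=True)
    s_sigma = style.std(dim=(2, 3), keepdim=True) + eps
    return s_sigma * (content - c_mu) / c_sigma + s_mu

def style_mix_batch(features: torch.Tensor) -> torch.Tensor:
    """Style-augment a batch by borrowing statistics from a shuffled batch."""
    perm = torch.randperm(features.size(0), device=features.device)
    return adain(features, features[perm])
```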
- Unsupervised Domain Transfer with Conditional Invertible Neural Networks [83.90291882730925]
We propose a domain transfer approach based on conditional invertible neural networks (cINNs).
Our method inherently guarantees cycle consistency through its invertible architecture, and network training can be conducted efficiently with maximum likelihood.
Our method enables the generation of realistic spectral data and outperforms the state of the art on two downstream classification tasks.
arXiv Detail & Related papers (2023-03-17T18:00:27Z)
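To see why invertibility gives cycle consistency and a tractable likelihood for free, here is a minimal unconditional affine coupling block and the change-of-variables objective; a cINN would additionally feed the condition into the coupling networks. This is a generic sketch, not the paper's architecture, and it assumes an even feature dimension.

```python
import math
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Minimal affine coupling block, the building brick of (c)INNs."""
    def __init__(self, dim: int):
        super().__init__()
        # Predicts per-dimension scale and shift for the second half.
        self.net = nn.Sequential(nn.Linear(dim // 2, 64), nn.ReLU(),
                                 nn.Linear(64, dim))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        s, t = self.net(x1).chunk(2, dim=1)
        s = torch.tanh(s)                    # keep scales numerically tame
        z2 = x2 * s.exp() + t                # trivially invertible given x1
        return torch.cat([x1, z2], dim=1), s.sum(dim=1)  # output, log|det J|

def nll(z: torch.Tensor, log_det: torch.Tensor) -> torch.Tensor:
    """Maximum-likelihood objective via the change-of-variables formula."""
    log_pz = -0.5 * (z ** 2).sum(dim=1) - 0.5 * z.size(1) * math.log(2 * math.pi)
    return -(log_pz + log_det).mean()
```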
- A domain adaptive deep learning solution for scanpath prediction of paintings [66.46953851227454]
This paper focuses on eye-movement analysis of viewers during the visual experience of a set of paintings.
We introduce a new approach to predicting human visual attention, a process that underlies several human cognitive functions.
The proposed new architecture ingests images and returns scanpaths, a sequence of points featuring a high likelihood of catching viewers' attention.
arXiv Detail & Related papers (2022-09-22T22:27:08Z)
- IMPaSh: A Novel Domain-shift Resistant Representation for Colorectal Cancer Tissue Classification [5.017246733091823]
We propose a new augmentation called PatchShuffling and a novel self-supervised contrastive learning framework named IMPaSh for pre-training deep learning models.
We show that the proposed method outperforms other traditional histology domain-adaptation and state-of-the-art self-supervised learning methods.
arXiv Detail & Related papers (2022-08-23T15:59:08Z)
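The PatchShuffling augmentation named above is easy to sketch (a plausible reading of the idea, not the authors' exact code): tile an image, permute the tiles, and reassemble, destroying global layout while preserving local texture.

```python
import torch

def patch_shuffle(img: torch.Tensor, grid: int = 4) -> torch.Tensor:
    """Split a (C, H, W) image into grid x grid tiles and permute them;
    H and W are assumed divisible by `grid`."""
    c, h, w = img.shape
    ph, pw = h // grid, w // grid
    tiles = img.unfold(1, ph, ph).unfold(2, pw, pw)      # (C, g, g, ph, pw)
    tiles = tiles.reshape(c, grid * grid, ph, pw)
    tiles = tiles[:, torch.randperm(grid * grid)]        # shuffle tile order
    tiles = tiles.reshape(c, grid, grid, ph, pw).permute(0, 1, 3, 2, 4)
    return tiles.reshape(c, h, w)
```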
- Self-Supervised Vision Transformers Learn Visual Concepts in Histopathology [5.164102666113966]
We conduct a search for good representations in pathology by training a variety of self-supervised models and validating them on weakly-supervised and patch-level tasks.
Our key finding is in discovering that Vision Transformers using DINO-based knowledge distillation are able to learn data-efficient and interpretable features in histology images.
arXiv Detail & Related papers (2022-03-01T16:14:41Z)
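The heart of the DINO-style distillation referenced above can be sketched as follows (a generic version; the temperatures and centering follow the usual DINO recipe rather than this paper's specifics):

```python
import torch
import torch.nn.functional as F

def dino_loss(student_logits: torch.Tensor, teacher_logits: torch.Tensor,
              center: torch.Tensor, tau_s: float = 0.1,
              tau_t: float = 0.04) -> torch.Tensor:
    """The student matches a centered, sharpened teacher distribution; the
    teacher output is detached so only the student receives gradients."""
    teacher = F.softmax((teacher_logits - center) / tau_t, dim=-1).detach()
    log_student = F.log_softmax(student_logits / tau_s, dim=-1)
    return -(teacher * log_student).sum(dim=-1).mean()
```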
- Self-Supervised Domain Adaptation for Diabetic Retinopathy Grading using Vessel Image Reconstruction [61.58601145792065]
We learn invariant target-domain features by defining a novel self-supervised task based on retinal vessel image reconstructions.
Our approach outperforms existing domain adaptation strategies.
arXiv Detail & Related papers (2021-07-20T09:44:07Z)
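A schematic of the self-supervised adaptation step stated above (a generic sketch, not the paper's code; encoder and decoder are assumed modules):

```python
import torch.nn.functional as F

def vessel_adaptation_step(encoder, decoder, fundus_batch, vessel_batch, optimizer):
    """Adapt encoder features to the target domain by reconstructing retinal
    vessel maps, which requires no diabetic-retinopathy grade labels."""
    pred = decoder(encoder(fundus_batch))
    loss = F.l1_loss(pred, vessel_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```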
- A Survey on Graph-Based Deep Learning for Computational Histopathology [36.58189530598098]
We have witnessed a rapid expansion of the use of machine learning and deep learning for the analysis of digital pathology and biopsy image patches.
Traditional learning over patch-wise features using convolutional neural networks limits a model's ability to capture global contextual information.
We provide a conceptual grounding of graph-based deep learning and discuss its current success for tumor localization and classification, tumor invasion and staging, image retrieval, and survival prediction.
arXiv Detail & Related papers (2021-07-01T07:50:35Z)
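The basic move in graph-based pathology pipelines surveyed above, building a graph over patch locations so a GNN can aggregate context beyond any single patch, can be sketched as (a generic k-NN construction, not from the survey itself):

```python
import torch

def knn_patch_graph(coords: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Build a k-nearest-neighbor edge index of shape (2, N*k) over patch
    centroid coordinates (N, 2), ready for a GNN message-passing layer."""
    dist = torch.cdist(coords, coords)                      # pairwise distances
    nbrs = dist.topk(k + 1, largest=False).indices[:, 1:]   # drop self-loops
    src = torch.arange(coords.size(0)).repeat_interleave(k)
    return torch.stack([src, nbrs.reshape(-1)])             # COO edge index
```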
- Domain Shift in Computer Vision models for MRI data analysis: An Overview [64.69150970967524]
Machine learning and computer vision methods are showing good performance in medical imagery analysis.
Yet only a few applications are now in clinical use.
Poor transferability of the models to data from different sources or acquisition domains is one of the reasons for this.
arXiv Detail & Related papers (2020-10-14T16:34:21Z)
- Stain Style Transfer of Histopathology Images Via Structure-Preserved Generative Learning [31.254432319814864]
This study proposes two stain style transfer models, SSIM-GAN and DSCSI-GAN, based on generative adversarial networks.
By incorporating structure-preservation metrics and feedback from an auxiliary diagnosis network into learning, medically-relevant information is preserved in the color-normalized images.
arXiv Detail & Related papers (2020-07-24T15:30:19Z)
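As a sketch of how a structure-preservation metric can enter the generator objective above (a simplified, single-window SSIM over whole images; the actual models use windowed SSIM and DSCSI, plus an auxiliary diagnosis network):

```python
import torch
import torch.nn.functional as F

def ssim_global(x: torch.Tensor, y: torch.Tensor,
                c1: float = 0.01 ** 2, c2: float = 0.03 ** 2) -> torch.Tensor:
    """Single-window SSIM computed over whole images (a simplification of
    the usual sliding-window formulation)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def generator_loss(d_fake: torch.Tensor, source: torch.Tensor,
                   normalized: torch.Tensor, w: float = 1.0) -> torch.Tensor:
    """Adversarial term plus a penalty for destroying tissue structure."""
    adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    return adv + w * (1.0 - ssim_global(source, normalized))
```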
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.