Adversarial learning of cancer tissue representations
- URL: http://arxiv.org/abs/2108.02223v1
- Date: Wed, 4 Aug 2021 18:00:47 GMT
- Title: Adversarial learning of cancer tissue representations
- Authors: Adalberto Claudio Quiros, Nicolas Coudray, Anna Yeaton, Wisuwat
Sunhem, Roderick Murray-Smith, Aristotelis Tsirigos, Ke Yuan
- Abstract summary: We present an adversarial learning model to extract feature representations of cancer tissue, without the need for manual annotations.
We show that these representations identify a variety of morphological characteristics across three cancer types: breast, colon, and lung.
Our results show that our model captures distinct phenotypic characteristics of real tissue samples, paving the way for further understanding of tumor progression and tumor micro-environment.
- Score: 6.395981404833557
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning based analysis of histopathology images shows promise in
advancing the understanding of tumor progression, tumor micro-environment, and
their underpinning biological processes. So far, these approaches have focused
on extracting information associated with annotations. In this work, we ask how
much information can be learned from the tissue architecture itself.
We present an adversarial learning model to extract feature representations
of cancer tissue, without the need for manual annotations. We show that these
representations are able to identify a variety of morphological characteristics
across three cancer types: breast, colon, and lung. This is supported by 1) the
separation of morphologic characteristics in the latent space; 2) the ability
to classify tissue type with logistic regression using latent representations,
with an AUC of 0.97 and 85% accuracy, comparable to supervised deep models; 3)
the ability to predict the presence of tumor in Whole Slide Images (WSIs) using
multiple instance learning (MIL), achieving an AUC of 0.98 and 94% accuracy.
Our results show that our model captures distinct phenotypic characteristics
of real tissue samples, paving the way for further understanding of tumor
progression and tumor micro-environment, and ultimately refining
histopathological classification for diagnosis and treatment. The code and
pretrained models are available at:
https://github.com/AdalbertoCq/Adversarial-learning-of-cancer-tissue-representations
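The abstract's second result, fitting a plain logistic regression on the learned latent representations to classify tissue type, can be sketched as below. This is an illustrative stand-in only: the synthetic Gaussian "latent vectors," their dimensionality, and the class separation are assumptions for demonstration, not the paper's encoder or data (those are in the linked repository).

```python
# Sketch: tissue-type classification from pre-extracted latent features
# with logistic regression. The GAN encoder is not reproduced here;
# synthetic 128-d vectors stand in for latents of two tissue classes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score

rng = np.random.default_rng(0)

# Stand-in latents: in practice these would come from the trained
# adversarial model's encoder applied to tissue tiles.
n, d = 1000, 128
X = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(n, d)),  # class 0 tiles
    rng.normal(loc=0.5, scale=1.0, size=(n, d)),  # class 1 tiles
])
y = np.array([0] * n + [1] * n)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"AUC={auc:.2f}, accuracy={acc:.2f}")
```

The point of the design is that a linear probe on frozen representations is a standard check of representation quality: if a simple classifier separates tissue types well, the adversarial model has already encoded the relevant morphology.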
Related papers
- Tertiary Lymphoid Structures Generation through Graph-based Diffusion [54.37503714313661]
In this work, we leverage state-of-the-art graph-based diffusion models to generate biologically meaningful cell-graphs.
We show that the adopted graph diffusion model is able to accurately learn the distribution of cells in terms of their tertiary lymphoid structures (TLS) content.
arXiv Detail & Related papers (2023-10-10T14:37:17Z)
- Active Learning Enhances Classification of Histopathology Whole Slide Images with Attention-based Multiple Instance Learning [48.02011627390706]
We train an attention-based MIL and calculate a confidence metric for every image in the dataset to select the most uncertain WSIs for expert annotation.
With a novel attention guiding loss, this leads to an accuracy boost of the trained models with few regions annotated for each class.
In the future, this approach may serve as an important contribution to training MIL models in the clinically relevant context of cancer classification in histopathology.
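The attention-based MIL pooling this summary relies on can be sketched as follows. This is a minimal illustration of attention pooling over a bag of tile features, with hypothetical parameter shapes and names; it is not the cited paper's implementation, and the confidence metric for active learning is not reproduced.

```python
# Sketch of attention-based MIL pooling: a WSI is a "bag" of tile
# features, and learned attention weights decide how much each tile
# contributes to the slide-level representation.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(H, V, w):
    """H: (n_tiles, d) tile features; V: (d, k) and w: (k,) attention params."""
    scores = np.tanh(H @ V) @ w   # (n_tiles,) unnormalized attention
    a = softmax(scores)           # attention weights sum to 1
    return a @ H, a               # slide-level feature, per-tile weights

rng = np.random.default_rng(1)
H = rng.normal(size=(50, 64))     # 50 tiles, 64-dim features each
V = rng.normal(size=(64, 16))
w = rng.normal(size=16)
z, a = attention_pool(H, V, w)
print(z.shape, a.shape)
```

The per-tile attention weights `a` are also what makes such models useful for annotation triage: tiles with high weight indicate which regions drove the slide-level prediction.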
arXiv Detail & Related papers (2023-03-02T15:18:58Z)
- Domain-specific transfer learning in the automated scoring of tumor-stroma ratio from histopathological images of colorectal cancer [1.2264932946286657]
Tumor-stroma ratio (TSR) is a prognostic factor for many types of solid tumors.
The method is based on convolutional neural networks trained to classify colorectal cancer tissue.
arXiv Detail & Related papers (2022-12-30T12:27:27Z)
- Deep Learning Generates Synthetic Cancer Histology for Explainability and Education [37.13457398561086]
Conditional generative adversarial networks (cGANs) are AI models that generate synthetic images.
We describe the use of a cGAN for explaining models trained to classify molecularly-subtyped tumors.
We show that clear, intuitive cGAN visualizations can reinforce and improve human understanding of histologic manifestations of tumor biology.
arXiv Detail & Related papers (2022-11-12T00:14:57Z)
- Mapping the landscape of histomorphological cancer phenotypes using self-supervised learning on unlabeled, unannotated pathology slides [9.27127895781971]
Histomorphological Phenotype Learning operates via the automatic discovery of discriminatory image features in small image tiles.
Tiles are grouped into morphologically similar clusters which constitute a library of histomorphological phenotypes.
arXiv Detail & Related papers (2022-05-04T08:06:55Z)
- SAG-GAN: Semi-Supervised Attention-Guided GANs for Data Augmentation on Medical Images [47.35184075381965]
We present a data augmentation method for generating synthetic medical images using cycle-consistent Generative Adversarial Networks (GANs).
The proposed GAN-based model can generate a tumor image from a normal image and, in turn, a normal image from a tumor image.
We train classification models using real images with classic data augmentation methods and, separately, using synthetic images.
arXiv Detail & Related papers (2020-11-15T14:01:24Z)
- Spectral-Spatial Recurrent-Convolutional Networks for In-Vivo Hyperspectral Tumor Type Classification [49.32653090178743]
We demonstrate the feasibility of in-vivo tumor type classification using hyperspectral imaging and deep learning.
Our best model achieves an AUC of 76.3%, significantly outperforming previous conventional and deep learning methods.
arXiv Detail & Related papers (2020-07-02T12:00:53Z)
- Synthesizing lesions using contextual GANs improves breast cancer classification on mammograms [0.4297070083645048]
We present a novel generative adversarial network (GAN) model for data augmentation that can realistically synthesize and remove lesions on mammograms.
With self-attention and semi-supervised learning components, the U-net-based architecture can generate high-resolution (256x256 px) outputs.
arXiv Detail & Related papers (2020-05-29T21:23:00Z)
- Representation Learning of Histopathology Images using Graph Neural Networks [12.427740549056288]
We propose a two-stage framework for WSI representation learning.
We sample relevant patches using a color-based method and use graph neural networks to learn relations among sampled patches to aggregate the image information into a single vector representation.
We demonstrate the performance of our approach for discriminating two sub-types of lung cancer, Lung Adenocarcinoma (LUAD) and Lung Squamous Cell Carcinoma (LUSC).
arXiv Detail & Related papers (2020-04-16T00:09:20Z)
- An interpretable classifier for high-resolution breast cancer screening images utilizing weakly supervised localization [45.00998416720726]
We propose a framework to address the unique properties of medical images.
This model first uses a low-capacity, yet memory-efficient, network on the whole image to identify the most informative regions.
It then applies another higher-capacity network to collect details from chosen regions.
Finally, it employs a fusion module that aggregates global and local information to make a final prediction.
arXiv Detail & Related papers (2020-02-13T15:28:42Z)
- Stan: Small tumor-aware network for breast ultrasound image segmentation [68.8204255655161]
We propose a novel deep learning architecture called Small Tumor-Aware Network (STAN) to improve the performance of segmenting tumors of different sizes.
The proposed approach outperformed the state-of-the-art approaches in segmenting small breast tumors.
arXiv Detail & Related papers (2020-02-03T22:25:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.