Whole Slide Images based Cancer Survival Prediction using Attention
Guided Deep Multiple Instance Learning Networks
- URL: http://arxiv.org/abs/2009.11169v1
- Date: Wed, 23 Sep 2020 14:31:15 GMT
- Title: Whole Slide Images based Cancer Survival Prediction using Attention
Guided Deep Multiple Instance Learning Networks
- Authors: Jiawen Yao, Xinliang Zhu, Jitendra Jonnagaddala, Nicholas Hawkins,
Junzhou Huang
- Abstract summary: Current image-based survival models are limited to key patches or clusters derived from Whole Slide Images (WSIs).
We propose Deep Attention Multiple Instance Survival Learning (DeepAttnMISL) by introducing both siamese MI-FCN and attention-based MIL pooling.
We evaluated our methods on two large cancer whole slide image datasets, and our results suggest that the proposed approach is more effective and better suited to large datasets.
- Score: 38.39901070720532
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional image-based survival prediction models rely on discriminative
patch labeling, which makes those methods difficult to scale to large
datasets. Recent studies have shown that the Multiple Instance Learning (MIL)
framework is useful for classifying histopathological images when no
annotations are available. Unlike current image-based survival models that
are limited to key patches or clusters derived from Whole Slide Images (WSIs),
we propose Deep Attention Multiple Instance Survival Learning (DeepAttnMISL),
which introduces both a siamese MI-FCN and attention-based MIL pooling to
efficiently learn imaging features from the WSI and then aggregate WSI-level
information to the patient level. Attention-based aggregation is more flexible
and adaptive than the aggregation techniques in recent survival models. We
evaluated our methods on two large cancer whole slide image datasets, and our
results suggest that the proposed approach is more effective, better suited to
large datasets, and more interpretable in locating the important patterns and
features that contribute to accurate cancer survival predictions. The proposed
framework can also be used to assess an individual patient's risk and thus
assist in delivering personalized medicine. Code is available at
https://github.com/uta-smile/DeepAttnMISL_MEDIA.
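The attention-based MIL pooling the abstract describes can be illustrated with a minimal NumPy sketch in the style of gated/attention MIL (softmax-weighted sum of patch embeddings). All names and dimensions here are illustrative assumptions, not taken from the paper's implementation:

```python
import numpy as np

def attention_mil_pool(instances, V, w):
    """Attention-based MIL pooling over one WSI 'bag'.

    instances: (K, D) array of K patch-level feature vectors
    V: (L, D) projection matrix, w: (L,) attention vector
    Returns the (D,) bag-level representation and the (K,) attention weights.
    """
    scores = w @ np.tanh(V @ instances.T)         # (K,) unnormalized attention
    scores = scores - scores.max()                # numerical stability
    a = np.exp(scores) / np.exp(scores).sum()     # softmax: weights sum to 1
    z = a @ instances                             # weighted sum of patch features
    return z, a

rng = np.random.default_rng(0)
bag = rng.normal(size=(6, 8))   # 6 patch embeddings, 8-dim each
V = rng.normal(size=(4, 8))
w = rng.normal(size=4)
z, a = attention_mil_pool(bag, V, w)
```

In the learned setting, `V` and `w` are trained parameters, and the attention weights `a` are what give the model its interpretability: patches with high weight are the ones driving the survival prediction.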
Related papers
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical
Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Active Learning Enhances Classification of Histopathology Whole Slide
Images with Attention-based Multiple Instance Learning [48.02011627390706]
We train an attention-based MIL and calculate a confidence metric for every image in the dataset to select the most uncertain WSIs for expert annotation.
With a novel attention-guiding loss, this boosts the accuracy of the trained models with only a few annotated regions per class.
It may in the future serve as an important contribution to training MIL models in the clinically relevant context of cancer classification in histopathology.
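The uncertainty-driven selection this entry describes can be sketched with a simple entropy-based acquisition step; the entropy criterion and names below are illustrative assumptions, not the paper's exact confidence metric:

```python
import numpy as np

def select_uncertain(probs, k):
    """Pick the k bags (WSIs) whose predicted class distribution has
    the highest entropy, i.e. the most uncertain slides to send for
    expert annotation."""
    eps = 1e-12  # avoid log(0)
    entropy = -(probs * np.log(probs + eps)).sum(axis=1)
    return np.argsort(entropy)[::-1][:k]  # indices, most uncertain first

probs = np.array([[0.98, 0.02],   # confident prediction
                  [0.55, 0.45],   # uncertain
                  [0.70, 0.30],
                  [0.50, 0.50]])  # most uncertain
print(select_uncertain(probs, 2))  # → [3 1]
```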
arXiv Detail & Related papers (2023-03-02T15:18:58Z)
- AMIGO: Sparse Multi-Modal Graph Transformer with Shared-Context
Processing for Representation Learning of Giga-pixel Images [53.29794593104923]
We present a novel concept of shared-context processing for whole slide histopathology images.
AMIGO uses the cellular graph within the tissue to provide a single representation for a patient.
We show that our model is strongly robust to missing information, achieving the same performance with as little as 20% of the data.
arXiv Detail & Related papers (2023-03-01T23:37:45Z)
- Hierarchical Transformer for Survival Prediction Using Multimodality
Whole Slide Images and Genomics [63.76637479503006]
Learning good representations of giga-pixel whole slide pathology images (WSIs) for downstream tasks is critical.
This paper proposes a hierarchical-based multimodal transformer framework that learns a hierarchical mapping between pathology images and corresponding genes.
Our architecture requires fewer GPU resources compared with benchmark methods while maintaining better WSI representation ability.
arXiv Detail & Related papers (2022-11-29T23:47:56Z)
- Histopathology DatasetGAN: Synthesizing Large-Resolution Histopathology
Datasets [0.0]
Histopathology datasetGAN (HDGAN) is a framework for image generation and segmentation that scales well to large-resolution histopathology images.
We make several adaptations from the original framework, including updating the generative backbone, selectively extracting latent features from the generator, and switching to memory-mapped arrays.
We evaluate HDGAN on a thrombotic microangiopathy high-resolution tile dataset, demonstrating strong performance on the high-resolution image-annotation generation task.
arXiv Detail & Related papers (2022-07-06T14:33:50Z)
- Intelligent Masking: Deep Q-Learning for Context Encoding in Medical
Image Analysis [48.02011627390706]
We develop a novel self-supervised approach that occludes targeted regions to improve the pre-training procedure.
We show that training the agent against the prediction model can significantly improve the semantic features extracted for downstream classification tasks.
arXiv Detail & Related papers (2022-03-25T19:05:06Z)
- MHAttnSurv: Multi-Head Attention for Survival Prediction Using
Whole-Slide Pathology Images [4.148207298604488]
We developed a multi-head attention approach to focus on various parts of a tumor slide, for more comprehensive information extraction from WSIs.
Our model achieved an average c-index of 0.640, outperforming two existing state-of-the-art approaches for WSI-based survival prediction.
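The c-index reported in this entry is Harrell's concordance index, the standard metric for survival prediction: the fraction of comparable patient pairs in which the higher-risk patient experiences the event earlier. A minimal sketch with toy data (values illustrative, not from the paper):

```python
import numpy as np

def concordance_index(times, events, risks):
    """Harrell's c-index. A pair (i, j) is comparable when patient i has
    an observed event and it occurs before patient j's time; the pair is
    concordant when i was also assigned the higher predicted risk."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5  # ties count as half
    return concordant / comparable

times  = np.array([2.0, 4.0, 6.0, 8.0])  # follow-up times
events = np.array([1, 1, 0, 1])          # 1 = event observed, 0 = censored
risks  = np.array([0.9, 0.7, 0.4, 0.2])  # higher = predicted higher risk
print(concordance_index(times, events, risks))  # → 1.0 (perfectly concordant)
```

A c-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so the 0.640 reported above sits in the typical range for WSI-based survival models.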
arXiv Detail & Related papers (2021-10-22T02:18:27Z)
- Medical Image Harmonization Using Deep Learning Based Canonical Mapping:
Toward Robust and Generalizable Learning in Imaging [4.396671464565882]
We propose a new paradigm in which data from a diverse range of acquisition conditions are "harmonized" to a common reference domain.
We test this approach on two example problems, namely MRI-based brain age prediction and classification of schizophrenia.
arXiv Detail & Related papers (2020-10-11T22:01:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences of its use.