MHAttnSurv: Multi-Head Attention for Survival Prediction Using
Whole-Slide Pathology Images
- URL: http://arxiv.org/abs/2110.11558v1
- Date: Fri, 22 Oct 2021 02:18:27 GMT
- Title: MHAttnSurv: Multi-Head Attention for Survival Prediction Using
Whole-Slide Pathology Images
- Authors: Shuai Jiang, Arief A. Suriawinata, Saeed Hassanpour
- Abstract summary: We developed a multi-head attention approach to focus on various parts of a tumor slide, for more comprehensive information extraction from WSIs.
Our model achieved an average c-index of 0.640, outperforming two existing state-of-the-art approaches for WSI-based survival prediction.
- Score: 4.148207298604488
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In pathology, whole-slide images (WSI) based survival prediction has
attracted increasing interest. However, given the large size of WSIs and the
lack of pathologist annotations, extracting the prognostic information from
WSIs remains a challenging task. Previous studies have used multiple instance
learning approaches to combine the information from multiple randomly sampled
patches, but different visual patterns may contribute differently to prognosis
prediction. In this study, we developed a multi-head attention approach to
focus on various parts of a tumor slide, for more comprehensive information
extraction from WSIs. We evaluated our approach on four cancer types from The
Cancer Genome Atlas database. Our model achieved an average c-index of 0.640,
outperforming two existing state-of-the-art approaches for WSI-based survival
prediction, which have an average c-index of 0.603 and 0.619 on these datasets.
Visualization of our attention maps reveals each attention head focuses
synergistically on different morphological patterns.
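The abstract names two concrete ingredients: multi-head attention pooling over WSI patch embeddings and evaluation by c-index. The sketch below is a minimal, hypothetical PyTorch illustration of those ideas, not the authors' released code; the class name MultiHeadAttnPool, the layer sizes, and the number of heads are assumptions made only for illustration.

```python
# Hypothetical sketch: multi-head attention MIL pooling for WSI survival
# prediction. All names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class MultiHeadAttnPool(nn.Module):
    """Pools a bag of patch embeddings with several attention heads, so each
    head can emphasize a different morphological pattern, then outputs a
    slide-level risk score."""

    def __init__(self, embed_dim: int = 512, n_heads: int = 4):
        super().__init__()
        # One small tanh-MLP attention scorer per head (attention-based MIL).
        self.heads = nn.ModuleList([
            nn.Sequential(nn.Linear(embed_dim, 128), nn.Tanh(), nn.Linear(128, 1))
            for _ in range(n_heads)
        ])
        self.risk = nn.Linear(embed_dim * n_heads, 1)  # risk from concatenated summaries

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (n_patches, embed_dim) feature vectors from one slide
        summaries = []
        for head in self.heads:
            w = torch.softmax(head(patches), dim=0)      # (n_patches, 1) attention weights
            summaries.append((w * patches).sum(dim=0))   # weighted average of patches
        return self.risk(torch.cat(summaries))           # shape (1,)

def concordance_index(times, risks, events):
    """Harrell's c-index, the metric quoted in the abstract: the fraction of
    comparable patient pairs whose predicted risks are ordered consistently
    with their observed survival times (ties count half)."""
    concordant, comparable = 0.0, 0.0
    for i in range(len(times)):
        if not events[i]:
            continue  # a pair is comparable only if the earlier time is an observed event
        for j in range(len(times)):
            if times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable if comparable else float("nan")
```

In use, patch embeddings from a pretrained feature extractor would be pooled per slide into a risk score, and the c-index would be computed over held-out patients.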
Related papers
- Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
arXiv Detail & Related papers (2024-03-19T09:28:19Z)
- Active Learning Enhances Classification of Histopathology Whole Slide Images with Attention-based Multiple Instance Learning [48.02011627390706]
We train an attention-based MIL model and calculate a confidence metric for every image in the dataset to select the most uncertain WSIs for expert annotation.
With a novel attention guiding loss, this leads to an accuracy boost of the trained models with few regions annotated for each class.
It may in the future serve as an important contribution to training MIL models in the clinically relevant context of cancer classification in histopathology.
arXiv Detail & Related papers (2023-03-02T15:18:58Z)
- AMIGO: Sparse Multi-Modal Graph Transformer with Shared-Context Processing for Representation Learning of Giga-pixel Images [53.29794593104923]
We present a novel concept of shared-context processing for whole slide histopathology images.
AMIGO uses the cellular graph within the tissue to provide a single representation for a patient.
We show that our model is strongly robust to missing information to an extent that it can achieve the same performance with as low as 20% of the data.
arXiv Detail & Related papers (2023-03-01T23:37:45Z)
- Hierarchical Transformer for Survival Prediction Using Multimodality Whole Slide Images and Genomics [63.76637479503006]
Learning good representations of giga-pixel whole-slide pathology images (WSIs) for downstream tasks is critical.
This paper proposes a hierarchical-based multimodal transformer framework that learns a hierarchical mapping between pathology images and corresponding genes.
Our architecture requires fewer GPU resources compared with benchmark methods while maintaining better WSI representation ability.
arXiv Detail & Related papers (2022-11-29T23:47:56Z)
- End-to-end Learning for Image-based Detection of Molecular Alterations in Digital Pathology [1.916179040410189]
Current approaches for classification of whole slide images (WSI) in digital pathology predominantly utilize a two-stage learning pipeline.
A major drawback of such approaches is the requirement for task-specific auxiliary labels which are not acquired in clinical routine.
We propose a novel learning pipeline for WSI classification that is trainable end-to-end and does not require any auxiliary annotations.
arXiv Detail & Related papers (2022-06-30T20:30:33Z)
- Colorectal cancer survival prediction using deep distribution based multiple-instance learning [5.231498575799198]
We develop a distribution-based multiple-instance survival learning algorithm (DeepDisMISL).
Our results suggest that the more information the model has about the distribution of patch scores within a WSI, the better the prediction performance.
DeepDisMISL demonstrated superior predictive ability compared to other recently published, state-of-the-art algorithms.
arXiv Detail & Related papers (2022-04-24T14:55:57Z)
- Multi-task fusion for improving mammography screening data classification [3.7683182861690843]
We propose a pipeline approach, where we first train a set of individual, task-specific models.
We then investigate the fusion thereof, which is in contrast to the standard model ensembling strategy.
Our fusion approaches improve AUC scores significantly by up to 0.04 compared to standard model ensembling.
arXiv Detail & Related papers (2021-12-01T13:56:27Z)
- Whole Slide Images based Cancer Survival Prediction using Attention Guided Deep Multiple Instance Learning Networks [38.39901070720532]
Current image-based survival models are limited to key patches or clusters derived from Whole Slide Images (WSIs).
We propose Deep Attention Multiple Instance Survival Learning (DeepAttnMISL) by introducing both siamese MI-FCN and attention-based MIL pooling.
We evaluated our methods on two large cancer whole-slide image datasets, and our results suggest that the proposed approach is more effective and suitable for large datasets.
arXiv Detail & Related papers (2020-09-23T14:31:15Z)
- Modeling Shared Responses in Neuroimaging Studies through MultiView ICA [94.31804763196116]
Group studies involving large cohorts of subjects are important to draw general conclusions about brain functional organization.
We propose a novel MultiView Independent Component Analysis model for group studies, where data from each subject are modeled as a linear combination of shared independent sources plus noise.
We demonstrate the usefulness of our approach first on fMRI data, where our model demonstrates improved sensitivity in identifying common sources among subjects.
arXiv Detail & Related papers (2020-06-11T17:29:53Z)
- Weakly supervised multiple instance learning histopathological tumor segmentation [51.085268272912415]
We propose a weakly supervised framework for whole slide image segmentation.
We exploit a multiple instance learning scheme for training models.
The proposed framework has been evaluated on multi-location and multi-centric public data from The Cancer Genome Atlas and the PatchCamelyon dataset.
arXiv Detail & Related papers (2020-04-10T13:12:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.