Colorectal cancer survival prediction using deep distribution based
multiple-instance learning
- URL: http://arxiv.org/abs/2204.11294v1
- Date: Sun, 24 Apr 2022 14:55:57 GMT
- Title: Colorectal cancer survival prediction using deep distribution based
multiple-instance learning
- Authors: Xingyu Li, Jitendra Jonnagaddala, Min Cen, Hong Zhang, Xu Steven Xu
- Abstract summary: We develop a distribution based multiple-instance survival learning algorithm (DeepDisMISL)
Our results suggest that the more information about the distribution of the patch scores within a WSI is used, the better the prediction performance.
DeepDisMISL demonstrated superior predictive ability compared to other recently published, state-of-the-art algorithms.
- Score: 5.231498575799198
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Several deep learning algorithms have been developed to predict survival of
cancer patients using whole slide images (WSIs). However, identification of
image phenotypes within the WSIs that are relevant to patient survival and
disease progression is difficult for both clinicians and deep learning
algorithms. Most deep learning based Multiple Instance Learning (MIL)
algorithms for survival prediction use either top instances (e.g., maxpooling)
or top/bottom instances (e.g., MesoNet) to identify image phenotypes. In this
study, we hypothesize that holistic information about the distribution of the
patch scores within a WSI can predict cancer survival better. We developed
a distribution based multiple-instance survival learning algorithm
(DeepDisMISL) to validate this hypothesis. We designed and executed experiments
using two large international colorectal cancer WSIs datasets - MCO CRC and
TCGA COAD-READ. Our results suggest that the more information about the
distribution of the patch scores within a WSI is used, the better the
prediction performance. Including multiple neighborhood instances around each selected
distribution location (e.g., percentiles) could further improve the prediction.
DeepDisMISL demonstrated superior predictive ability compared to other recently
published, state-of-the-art algorithms. Furthermore, our algorithm is
interpretable and could assist in understanding the relationship between cancer
morphological phenotypes and patients' cancer survival risk.
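As a rough illustration of the distribution-based pooling idea described in the abstract, the sketch below summarizes a bag of patch scores by reading the sorted scores at fixed percentile locations plus a small neighborhood of instances around each location. The function name, percentile set, and neighborhood size `k` are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def distribution_pooling(patch_scores, percentiles=(0, 10, 25, 50, 75, 90, 100), k=2):
    """Sketch of distribution-based MIL pooling: summarize a WSI's patch
    scores by sampling sorted scores at fixed percentile locations, plus k
    neighboring instances on each side of every location."""
    scores = np.sort(np.asarray(patch_scores, dtype=float))
    n = len(scores)
    features = []
    for p in percentiles:
        idx = int(round(p / 100 * (n - 1)))            # percentile location
        lo, hi = max(0, idx - k), min(n, idx + k + 1)  # neighborhood window
        features.extend(scores[lo:hi])
    return np.array(features)  # fixed-length feature for a survival head

# Example: 1,000 patch scores reduced to a fixed-length distribution feature
rng = np.random.default_rng(0)
feat = distribution_pooling(rng.normal(size=1000))
print(feat.shape)  # (31,): 7 percentile windows over the sorted scores
```

The resulting vector would then feed a downstream survival model (e.g., a Cox-style head); using more percentile locations, or wider neighborhoods, gives the model a richer picture of the score distribution, which is the paper's central hypothesis.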
Related papers
- M2EF-NNs: Multimodal Multi-instance Evidence Fusion Neural Networks for Cancer Survival Prediction [24.323961146023358]
We propose a neural network model called M2EF-NNs for accurate cancer survival prediction.
To capture global information in the images, we use a pre-trained Vision Transformer (ViT) model.
We are the first to apply the Dempster-Shafer evidence theory (DST) to cancer survival prediction.
arXiv Detail & Related papers (2024-08-08T02:31:04Z) - SCMIL: Sparse Context-aware Multiple Instance Learning for Predicting Cancer Survival Probability Distribution in Whole Slide Images [9.005219442274344]
Existing methods for cancer survival prediction based on Whole Slide Images (WSI) often fail to provide clinically meaningful predictions.
We propose a Sparse Context-aware Multiple Instance Learning framework for predicting cancer survival probability distributions.
Our experimental results indicate that SCMIL outperforms current state-of-the-art methods for survival prediction, offering more clinically meaningful and interpretable outcomes.
arXiv Detail & Related papers (2024-06-30T11:22:36Z) - ASPEST: Bridging the Gap Between Active Learning and Selective
Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
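For readers unfamiliar with the selective-prediction setup, a minimal confidence-threshold sketch might look like the following; the threshold value and the `-1` abstain marker are arbitrary assumptions for illustration, not ASPEST's actual mechanism.

```python
import numpy as np

def selective_predict(probs, threshold=0.8):
    """Sketch of selective prediction: the model abstains (returns -1)
    whenever its maximum class probability falls below `threshold`."""
    probs = np.asarray(probs)
    conf = probs.max(axis=1)        # confidence = max softmax probability
    preds = probs.argmax(axis=1)    # the usual argmax prediction
    return np.where(conf >= threshold, preds, -1)

print(selective_predict([[0.9, 0.1], [0.55, 0.45]]))  # [ 0 -1]
```

In the active selective prediction setting described above, the abstained samples are natural candidates for active-learning label queries, which is the gap the paper aims to bridge.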
arXiv Detail & Related papers (2023-04-07T23:51:07Z) - Hierarchical Transformer for Survival Prediction Using Multimodality
Whole Slide Images and Genomics [63.76637479503006]
Learning good representations of giga-pixel whole slide pathology images (WSIs) for downstream tasks is critical.
This paper proposes a hierarchical-based multimodal transformer framework that learns a hierarchical mapping between pathology images and corresponding genes.
Our architecture requires fewer GPU resources compared with benchmark methods while maintaining better WSI representation ability.
arXiv Detail & Related papers (2022-11-29T23:47:56Z) - MHAttnSurv: Multi-Head Attention for Survival Prediction Using
Whole-Slide Pathology Images [4.148207298604488]
We developed a multi-head attention approach to focus on various parts of a tumor slide, for more comprehensive information extraction from WSIs.
Our model achieved an average c-index of 0.640, outperforming two existing state-of-the-art approaches for WSI-based survival prediction.
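The c-index quoted above can be computed, in its simplest (Harrell) form, roughly as follows: the fraction of comparable patient pairs in which the patient who experienced the event earlier was assigned the higher predicted risk. This is a naive O(n²) sketch that ignores tied event times.

```python
def concordance_index(times, events, risk_scores):
    """Sketch of Harrell's concordance index. A pair (i, j) is comparable
    when patient i had an observed event (events[i] == 1) strictly before
    patient j's time; ties in predicted risk count as half-concordant."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable
```

A c-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so the 0.640 reported above sits between the two; production code would use a library implementation that also handles censoring-time ties.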
arXiv Detail & Related papers (2021-10-22T02:18:27Z) - Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for
Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
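A bare-bones version of K-nearest-neighbor smoothing over model outputs could look like the sketch below; this illustrates the general KNNS idea (averaging predictions over neighbors in an embedding space) and is not the paper's exact method.

```python
import numpy as np

def knn_smooth(embeddings, probs, k=3):
    """Sketch of KNN smoothing: replace each sample's predicted class
    probabilities with the average over its k nearest neighbors
    (including itself) in embedding space. Brute-force distances."""
    d = np.linalg.norm(embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]  # self is its own nearest neighbor
    return probs[idx].mean(axis=1)
```

The intuition is that samples close together in feature space should receive similar predictions, so averaging damps noisy per-sample outputs.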
arXiv Detail & Related papers (2021-02-26T02:29:30Z) - Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z) - Whole Slide Images based Cancer Survival Prediction using Attention
Guided Deep Multiple Instance Learning Networks [38.39901070720532]
Current image-based survival models are limited to key patches or clusters derived from Whole Slide Images (WSIs).
We propose Deep Attention Multiple Instance Survival Learning (DeepAttnMISL) by introducing both siamese MI-FCN and attention-based MIL pooling.
We evaluated our methods on two large cancer whole slide images datasets and our results suggest that the proposed approach is more effective and suitable for large datasets.
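A minimal sketch of attention-based MIL pooling in the spirit of DeepAttnMISL is shown below; the parameter shapes and names are assumptions, and the paper's siamese MI-FCN feature extractor is not reproduced here.

```python
import numpy as np

def attention_mil_pooling(instance_feats, w, v):
    """Sketch of attention-based MIL pooling: score each patch embedding
    h_i with v^T tanh(W h_i), softmax the scores into attention weights,
    and return the weighted average as the slide-level representation.
    instance_feats: (n, d); w: (d_attn, d); v: (d_attn,) learned params."""
    logits = np.tanh(instance_feats @ w.T) @ v
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                  # softmax over instances
    return weights @ instance_feats, weights  # bag embedding + weights
```

Unlike max-pooling, every patch contributes to the bag embedding, and the learned weights double as an interpretability signal over the slide.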
arXiv Detail & Related papers (2020-09-23T14:31:15Z) - M2Net: Multi-modal Multi-channel Network for Overall Survival Time
Prediction of Brain Tumor Patients [151.4352001822956]
Early and accurate prediction of overall survival (OS) time can help to obtain better treatment planning for brain tumor patients.
Existing prediction methods rely on radiomic features at the local lesion area of a magnetic resonance (MR) volume.
We propose an end-to-end OS time prediction model, namely the Multi-modal Multi-channel Network (M2Net).
arXiv Detail & Related papers (2020-06-01T05:21:37Z) - A Systematic Approach to Featurization for Cancer Drug Sensitivity
Predictions with Deep Learning [49.86828302591469]
We train >35,000 neural network models, sweeping over common featurization techniques.
We found RNA-seq features to be highly redundant and informative even with subsets larger than 128 features.
arXiv Detail & Related papers (2020-04-30T20:42:17Z) - An Investigation of Interpretability Techniques for Deep Learning in
Predictive Process Analytics [2.162419921663162]
This paper explores interpretability techniques for two of the most successful learning algorithms in medical decision-making literature: deep neural networks and random forests.
We learn models that try to predict the type of cancer of the patient, given their set of medical activity records.
We see certain distinct features used for predictions that provide useful insights about the type of cancer, along with features that do not generalize well.
arXiv Detail & Related papers (2020-02-21T09:14:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.