Cross-modality Attention-based Multimodal Fusion for Non-small Cell Lung
Cancer (NSCLC) Patient Survival Prediction
- URL: http://arxiv.org/abs/2308.09831v2
- Date: Wed, 28 Feb 2024 03:43:00 GMT
- Title: Cross-modality Attention-based Multimodal Fusion for Non-small Cell Lung
Cancer (NSCLC) Patient Survival Prediction
- Authors: Ruining Deng, Nazim Shaikh, Gareth Shannon, Yao Nie
- Abstract summary: We propose a cross-modality attention-based multimodal fusion pipeline designed to integrate modality-specific knowledge for patient survival prediction in non-small cell lung cancer (NSCLC).
Compared with single-modality learning, which achieved c-indices of 0.5772 and 0.5885 using tissue image data or RNA-seq data alone, respectively, the proposed fusion approach achieved a c-index of 0.6587 in our experiment.
- Score: 0.6476298550949928
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cancer prognosis and survival outcome predictions are crucial for therapeutic
response estimation and for stratifying patients into various treatment groups.
Medical domains concerned with cancer prognosis are abundant with multiple
modalities, including pathological image data and non-image data such as
genomic information. To date, multimodal learning has shown potential to
enhance clinical prediction model performance by extracting and aggregating
information from different modalities of the same subject. This approach could
outperform single-modality learning, thus improving computer-aided diagnosis
and prognosis in numerous medical applications. In this work, we propose a
cross-modality attention-based multimodal fusion pipeline designed to integrate
modality-specific knowledge for patient survival prediction in non-small cell
lung cancer (NSCLC). Instead of merely concatenating or summing up the features
from different modalities, our method gauges the importance of each modality
for feature fusion using cross-modality relationships when fusing the
multimodal features. Compared with single-modality learning, which achieved
c-indices of 0.5772 and 0.5885 using tissue image data or RNA-seq data alone,
respectively, the proposed fusion approach achieved a c-index of 0.6587 in our
experiment, showcasing its capability to assimilate modality-specific knowledge
from varied modalities.
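As a concrete illustration of the fusion idea described above, below is a minimal PyTorch sketch of a cross-modality attention fusion head. It assumes tile-level tissue-image embeddings and a patient-level RNA-seq embedding of matching dimension; the module names, the softmax gate, and the single-score risk head are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of cross-modality attention-based fusion for survival
# prediction. Names, dimensions, and the risk head are assumptions, not the
# paper's actual architecture.
import torch
import torch.nn as nn


class CrossModalityAttentionFusion(nn.Module):
    """Fuses a tissue-image embedding with an RNA-seq embedding by letting
    each modality attend to the other, then weighting the two streams
    instead of simply concatenating or summing them."""

    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        # Each modality queries the other modality's features.
        self.img_to_rna = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.rna_to_img = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Gate that scores how much each cross-attended modality contributes.
        self.gate = nn.Sequential(nn.Linear(2 * dim, 2), nn.Softmax(dim=-1))
        # Risk head: one score per patient for a Cox-style ranking objective.
        self.risk_head = nn.Linear(dim, 1)

    def forward(self, img_feat: torch.Tensor, rna_feat: torch.Tensor) -> torch.Tensor:
        # img_feat, rna_feat: (batch, tokens, dim) modality-specific embeddings.
        img_ctx, _ = self.img_to_rna(img_feat, rna_feat, rna_feat)  # image attends to RNA-seq
        rna_ctx, _ = self.rna_to_img(rna_feat, img_feat, img_feat)  # RNA-seq attends to image
        img_vec = img_ctx.mean(dim=1)  # pool tokens -> (batch, dim)
        rna_vec = rna_ctx.mean(dim=1)
        weights = self.gate(torch.cat([img_vec, rna_vec], dim=-1))  # (batch, 2)
        fused = weights[:, :1] * img_vec + weights[:, 1:] * rna_vec
        return self.risk_head(fused).squeeze(-1)  # predicted risk score per patient


if __name__ == "__main__":
    model = CrossModalityAttentionFusion()
    img = torch.randn(8, 16, 256)  # e.g. 16 tile-level image embeddings per patient
    rna = torch.randn(8, 1, 256)   # one RNA-seq embedding per patient
    print(model(img, rna).shape)   # torch.Size([8])
```

The c-index used to compare the single-modality and fused models can be computed as below; this is a from-scratch sketch of Harrell's concordance index for right-censored survival data, not the evaluation code used in the paper.

```python
# Minimal Harrell's concordance index (c-index) for right-censored data.
def concordance_index(times, risks, events):
    """times: observed times; risks: predicted risk scores (higher = earlier
    expected event); events: 1 if the event occurred, 0 if censored."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if patient i had an event before time j.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable if comparable else float("nan")


print(concordance_index([2, 4, 6, 8], [0.9, 0.7, 0.4, 0.1], [1, 1, 0, 1]))  # 1.0
```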
Related papers
- Survival Prediction in Lung Cancer through Multi-Modal Representation Learning [9.403446155541346]
This paper presents a novel approach to survival prediction by harnessing comprehensive information from CT and PET scans, along with associated genomic data.
We aim to develop a robust predictive model for survival outcomes by integrating multi-modal imaging data with genetic information.
arXiv Detail & Related papers (2024-09-30T10:42:20Z)
- Confidence-aware multi-modality learning for eye disease screening [58.861421804458395]
We propose a novel multi-modality evidential fusion pipeline for eye disease screening.
It provides a measure of confidence for each modality and elegantly integrates the multi-modality information.
Experimental results on both public and internal datasets demonstrate that our model excels in robustness.
arXiv Detail & Related papers (2024-05-28T13:27:30Z)
- FORESEE: Multimodal and Multi-view Representation Learning for Robust Prediction of Cancer Survival [3.4686401890974197]
We propose a new end-to-end framework, FORESEE, for robustly predicting patient survival by mining multimodal information.
The cross-fusion transformer effectively utilizes features at the cellular level, tissue level, and tumor heterogeneity level and correlates them with prognosis.
The hybrid attention encoder (HAE) uses the denoising contextual attention module to obtain the contextual relationship features.
We also propose an asymmetrically masked triplet masked autoencoder to reconstruct lost information within modalities.
arXiv Detail & Related papers (2024-05-13T12:39:08Z)
- Cross-modality Guidance-aided Multi-modal Learning with Dual Attention for MRI Brain Tumor Grading [47.50733518140625]
Brain tumors represent one of the most fatal cancers worldwide and are especially common in children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning with dual attention for addressing the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z)
- XAI for In-hospital Mortality Prediction via Multimodal ICU Data [57.73357047856416]
We propose an efficient, explainable AI solution for predicting in-hospital mortality via multimodal ICU data.
We employ multimodal learning in our framework, which can receive heterogeneous inputs from clinical data and make decisions.
Our framework can be easily transferred to other clinical tasks, which facilitates the discovery of crucial factors in healthcare research.
arXiv Detail & Related papers (2023-12-29T14:28:04Z)
- Incomplete Multimodal Learning for Complex Brain Disorders Prediction [65.95783479249745]
We propose a new incomplete multimodal data integration approach that employs transformers and generative adversarial networks.
We apply our new method to predict cognitive degeneration and disease outcomes using multimodal imaging and genetic data from the Alzheimer's Disease Neuroimaging Initiative cohort.
arXiv Detail & Related papers (2023-05-25T16:29:16Z)
- Ambiguous Medical Image Segmentation using Diffusion Models [60.378180265885945]
We introduce a single diffusion model-based approach that produces multiple plausible outputs by learning a distribution over group insights.
Our proposed model generates a distribution of segmentation masks by leveraging the inherent sampling process of diffusion.
Comprehensive results show that our proposed approach outperforms existing state-of-the-art ambiguous segmentation networks.
arXiv Detail & Related papers (2023-04-10T17:58:22Z)
- RadioPathomics: Multimodal Learning in Non-Small Cell Lung Cancer for Adaptive Radiotherapy [1.8161758803237067]
We develop a multimodal late fusion approach to predict radiation therapy outcomes for non-small-cell lung cancer patients.
Experiments show that the proposed multimodal paradigm, with an AUC of 90.9%, outperforms each unimodal approach.
arXiv Detail & Related papers (2022-04-26T16:32:52Z)
- Deep Orthogonal Fusion: Multimodal Prognostic Biomarker Discovery Integrating Radiology, Pathology, Genomic, and Clinical Data [0.32622301272834525]
We predict the overall survival (OS) of glioma patients from diverse multimodal data with a Deep Orthogonal Fusion model.
The model learns to combine information from MRI exams, biopsy-based modalities, and clinical variables into a comprehensive multimodal risk score.
It significantly stratifies glioma patients by OS within clinical subsets, adding further granularity to prognostic clinical grading and molecular subtyping.
arXiv Detail & Related papers (2021-07-01T17:59:01Z)
- Multimodal fusion using sparse CCA for breast cancer survival prediction [18.586974977393258]
We propose a novel feature embedding module derived from canonical correlation analysis to account for intra-modality and inter-modality correlations.
Experiments on simulated and real data demonstrate how our proposed module can learn well-correlated multi-dimensional embeddings; a minimal CCA sketch follows the related-papers list below.
arXiv Detail & Related papers (2021-03-09T14:23:50Z)
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into the modality-specific appearance code.
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset.
arXiv Detail & Related papers (2020-02-22T14:32:04Z)
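For the sparse CCA entry above, the following is a minimal sketch of CCA-based multimodal feature embedding, using scikit-learn's plain (non-sparse) CCA as a stand-in for the sparse variant; the feature shapes, names, and downstream concatenation are illustrative assumptions, not that paper's implementation.

```python
# Hypothetical illustration of CCA-based multimodal feature embedding.
# Plain CCA is used as a stand-in for sparse CCA; shapes and names are
# assumptions for demonstration only.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
gene_expr = rng.normal(size=(100, 50))   # e.g. 100 patients x 50 gene features
image_feat = rng.normal(size=(100, 30))  # e.g. 100 patients x 30 image features

# Learn projections that maximize correlation between the two modalities.
cca = CCA(n_components=8)
cca.fit(gene_expr, image_feat)
gene_emb, image_emb = cca.transform(gene_expr, image_feat)

# Concatenate the correlated embeddings as a fused representation
# that a downstream survival model could consume.
fused = np.concatenate([gene_emb, image_emb], axis=1)
print(fused.shape)  # (100, 16)
```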