Advancing Head and Neck Cancer Survival Prediction via Multi-Label Learning and Deep Model Interpretation
- URL: http://arxiv.org/abs/2405.05488v1
- Date: Thu, 9 May 2024 01:30:04 GMT
- Title: Advancing Head and Neck Cancer Survival Prediction via Multi-Label Learning and Deep Model Interpretation
- Authors: Meixu Chen, Kai Wang, Jing Wang
- Abstract summary: We propose IMLSP, an Interpretable Multi-Label multi-modal deep Survival Prediction framework for predicting multiple HNC survival outcomes simultaneously.
We also present Grad-TEAM, a Gradient-weighted Time-Event Activation Mapping approach specifically developed for deep survival model visual explanation.
- Score: 7.698783025721071
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A comprehensive and reliable survival prediction model is of great importance to assist in the personalized management of Head and Neck Cancer (HNC) patients treated with curative Radiation Therapy (RT). In this work, we propose IMLSP, an Interpretable Multi-Label multi-modal deep Survival Prediction framework for predicting multiple HNC survival outcomes simultaneously and provide time-event specific visual explanation of the deep prediction process. We adopt Multi-Task Logistic Regression (MTLR) layers to convert survival prediction from a regression problem to a multi-time point classification task, and to enable prediction of multiple relevant survival outcomes at the same time. We also present Grad-TEAM, a Gradient-weighted Time-Event Activation Mapping approach specifically developed for deep survival model visual explanation, to generate patient-specific time-to-event activation maps. We evaluate our method with the publicly available RADCURE HNC dataset, where it outperforms the corresponding single-modal models and single-label models on all survival outcomes. The generated activation maps show that the model focuses primarily on the tumor and nodal volumes when making the decision, and that the volume of interest varies between high- and low-risk patients. We demonstrate that the multi-label learning strategy can improve learning efficiency and prognostic performance, while the interpretable survival prediction model shows promise for helping to understand the decision-making process of AI and facilitating personalized treatment.
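The MTLR idea described in the abstract (recasting survival regression as classification over discrete time points) can be sketched as follows. This is a minimal numpy illustration of the standard MTLR formulation for an uncensored patient, not the authors' implementation; the time grid, weights, and function names are assumptions.

```python
import numpy as np

def mtlr_labels(event_time, time_bins):
    """Encode a survival time as a binary sequence over discrete time
    points: 0 while the patient is event-free, 1 once the event occurred."""
    return (time_bins >= event_time).astype(float)

def mtlr_log_likelihood(x, W, b, y):
    """Log-likelihood of one uncensored patient under an MTLR head.

    x : (d,) feature vector; W : (K, d) per-time-point weights; b : (K,);
    y : (K,) monotone 0/1 label sequence from mtlr_labels().
    """
    scores = W @ x + b                      # (K,) per-time-point logits
    observed = scores @ y                   # score of the observed sequence
    # Partition function over the K+1 admissible monotone sequences
    # (event in interval j => ones from position j onward; j=K: censored-free).
    K = len(scores)
    seq_scores = [scores[j:].sum() for j in range(K + 1)]
    log_Z = np.logaddexp.reduce(seq_scores)
    return observed - log_Z
```

With all-zero weights every admissible sequence is equally likely, so the log-likelihood reduces to -log(K+1), which is a quick sanity check when wiring up such a layer.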
Related papers
- Enhanced Survival Prediction in Head and Neck Cancer Using Convolutional Block Attention and Multimodal Data Fusion [7.252280210331731]
This paper proposes a deep learning-based approach to predict survival outcomes in head and neck cancer patients.
Our method integrates feature extraction with a Convolutional Block Attention Module (CBAM) and a multi-modal data fusion layer.
The final prediction is achieved through a fully parametric discrete-time survival model.
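A fully parametric discrete-time survival model of the kind mentioned above typically predicts a hazard per time interval and multiplies complements to get the survival curve. A minimal sketch of that generic construction (an assumption for illustration, not this paper's exact model):

```python
import numpy as np

def discrete_time_survival(logits):
    """Survival curve from per-interval hazard logits.

    h_k = sigmoid(logit_k) is the conditional event probability in
    interval k; S(t_k) = prod_{j<=k} (1 - h_j) is the survival function.
    """
    hazards = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    return np.cumprod(1.0 - hazards)
```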
arXiv Detail & Related papers (2024-10-29T07:56:04Z)
- Survival Prediction in Lung Cancer through Multi-Modal Representation Learning [9.403446155541346]
This paper presents a novel approach to survival prediction by harnessing comprehensive information from CT and PET scans, along with associated Genomic data.
We aim to develop a robust predictive model for survival outcomes by integrating multi-modal imaging data with genetic information.
arXiv Detail & Related papers (2024-09-30T10:42:20Z)
- Deep State-Space Generative Model For Correlated Time-to-Event Predictions [54.3637600983898]
We propose a deep latent state-space generative model to capture the interactions among different types of correlated clinical events.
Our method also uncovers meaningful insights about the latent correlations among mortality and different types of organ failures.
arXiv Detail & Related papers (2024-07-28T02:42:36Z)
- AdaMSS: Adaptive Multi-Modality Segmentation-to-Survival Learning for Survival Outcome Prediction from PET/CT Images [11.028672732944251]
Deep survival models have been widely adopted to perform end-to-end survival prediction from medical images.
Recent deep survival models achieved promising performance by jointly performing tumor segmentation with survival prediction.
Existing deep survival models are unable to effectively leverage multi-modality images.
We propose a data-driven strategy to fuse multi-modality information, which realizes adaptive optimization of fusion strategies.
arXiv Detail & Related papers (2023-05-17T04:56:11Z)
- Time Associated Meta Learning for Clinical Prediction [78.99422473394029]
We propose a novel time associated meta learning (TAML) method to make effective predictions at multiple future time points.
To address the sparsity problem after task splitting, TAML employs a temporal information sharing strategy to augment the number of positive samples.
We demonstrate the effectiveness of TAML on multiple clinical datasets, where it consistently outperforms a range of strong baselines.
arXiv Detail & Related papers (2023-03-05T03:54:54Z)
- Deep learning methods for drug response prediction in cancer: predominant and emerging trends [50.281853616905416]
Exploiting computational predictive models to study and treat cancer holds great promise in improving drug development and personalized design of treatment plans.
A wave of recent papers demonstrates promising results in predicting cancer response to drug treatments while utilizing deep learning methods.
This review helps readers better understand the current state of the field and identify major challenges and promising solution paths.
arXiv Detail & Related papers (2022-11-18T03:26:31Z)
- SurvLatent ODE: A Neural ODE based time-to-event model with competing risks for longitudinal data improves cancer-associated Deep Vein Thrombosis (DVT) prediction [68.8204255655161]
We propose a generative time-to-event model, SurvLatent ODE, which parameterizes a latent representation under irregularly sampled data.
Our model then utilizes the latent representation to flexibly estimate survival times for multiple competing events without specifying shapes of event-specific hazard function.
SurvLatent ODE outperforms the current clinical standard Khorana Risk scores for stratifying DVT risk groups.
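At the core of a Neural ODE based model like the one summarized above is continuous-time integration of a latent state, which handles irregularly sampled data naturally because the trajectory can be evaluated at arbitrary times. A toy sketch using forward Euler in place of a learned ODE solver (the function names and dynamics are illustrative assumptions, not SurvLatent ODE itself):

```python
import numpy as np

def euler_latent_ode(z0, dynamics, t_grid):
    """Integrate dz/dt = dynamics(z, t) with forward Euler over t_grid,
    returning the latent trajectory, one row per time point."""
    zs = [np.asarray(z0, dtype=float)]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        z = zs[-1]
        zs.append(z + (t1 - t0) * dynamics(z, t0))  # one Euler step
    return np.stack(zs)
```

In a real model, `dynamics` would be a neural network and the trajectory would feed a downstream head estimating event-specific survival.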
arXiv Detail & Related papers (2022-04-20T17:28:08Z)
- Multi-objective optimization determines when, which and how to fuse deep networks: an application to predict COVID-19 outcomes [1.8351254916713304]
We present a novel approach to optimize the setup of a multimodal end-to-end model.
We test our method on the AIforCOVID dataset, attaining state-of-the-art results.
arXiv Detail & Related papers (2022-04-07T23:07:33Z)
- Multi-task fusion for improving mammography screening data classification [3.7683182861690843]
We propose a pipeline approach, where we first train a set of individual, task-specific models.
We then investigate fusing these models, in contrast to the standard model-ensembling strategy.
Our fusion approaches improve AUC scores significantly by up to 0.04 compared to standard model ensembling.
arXiv Detail & Related papers (2021-12-01T13:56:27Z)
- STELAR: Spatio-temporal Tensor Factorization with Latent Epidemiological Regularization [76.57716281104938]
We develop a tensor method to predict the evolution of epidemic trends for many regions simultaneously.
STELAR enables long-term prediction by incorporating latent temporal regularization through a system of discrete-time difference equations.
We conduct experiments using both county- and state-level COVID-19 data and show that our model can identify interesting latent patterns of the epidemic.
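The discrete-time difference equations used for epidemiological regularization in models like the one above are commonly SIR-style updates. A minimal sketch of one such step (the classic SIR recurrence, offered as an assumed example of the equation family, not STELAR's exact formulation):

```python
def sir_difference_step(S, I, R, beta, gamma):
    """One discrete-time step of the SIR difference equations.

    New infections: beta * S * I / N; recoveries: gamma * I.
    Returns the updated (S, I, R) compartments; the population N is conserved.
    """
    N = S + I + R
    new_infections = beta * S * I / N
    new_recoveries = gamma * I
    return (S - new_infections,
            I + new_infections - new_recoveries,
            R + new_recoveries)
```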
arXiv Detail & Related papers (2020-12-08T21:21:47Z)
- MIA-Prognosis: A Deep Learning Framework to Predict Therapy Response [58.0291320452122]
This paper aims at a unified deep learning approach to predict patient prognosis and therapy response.
We formalize the prognosis modeling as a multi-modal asynchronous time series classification task.
Our predictive model could further stratify low-risk and high-risk patients in terms of long-term survival.
arXiv Detail & Related papers (2020-10-08T15:30:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.