Deep Orthogonal Fusion: Multimodal Prognostic Biomarker Discovery
Integrating Radiology, Pathology, Genomic, and Clinical Data
- URL: http://arxiv.org/abs/2107.00648v1
- Date: Thu, 1 Jul 2021 17:59:01 GMT
- Title: Deep Orthogonal Fusion: Multimodal Prognostic Biomarker Discovery
Integrating Radiology, Pathology, Genomic, and Clinical Data
- Authors: Nathaniel Braman, Jacob W. H. Gordon, Emery T. Goossens, Caleb Willis,
Martin C. Stumpe, Jagadish Venkataraman
- Abstract summary: We predict the overall survival (OS) of glioma patients from diverse multimodal data with a Deep Orthogonal Fusion model.
The model learns to combine information from MRI exams, biopsy-based modalities, and clinical variables into a comprehensive multimodal risk score.
It significantly stratifies glioma patients by OS within clinical subsets, adding further granularity to prognostic clinical grading and molecular subtyping.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Clinical decision-making in oncology involves multimodal data such as
radiology scans, molecular profiling, histopathology slides, and clinical
factors. Despite the importance of these modalities individually, no deep
learning framework to date has combined them all to predict patient prognosis.
Here, we predict the overall survival (OS) of glioma patients from diverse
multimodal data with a Deep Orthogonal Fusion (DOF) model. The model learns to
combine information from multiparametric MRI exams, biopsy-based modalities
(such as H&E slide images and/or DNA sequencing), and clinical variables into a
comprehensive multimodal risk score. Prognostic embeddings from each modality
are learned and combined via attention-gated tensor fusion. To maximize the
information gleaned from each modality, we introduce a multimodal
orthogonalization (MMO) loss term that increases model performance by
incentivizing constituent embeddings to be more complementary. DOF predicts OS
in glioma patients with a median C-index of 0.788 +/- 0.067, significantly
outperforming (p=0.023) the best performing unimodal model with a median
C-index of 0.718 +/- 0.064. The prognostic model significantly stratifies
glioma patients by OS within clinical subsets, adding further granularity to
prognostic clinical grading and molecular subtyping.
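The attention-gated tensor fusion and MMO loss described in the abstract can be sketched as follows. This is an illustrative reconstruction from the abstract only, not the authors' implementation: the names `mmo_loss` and `gated_tensor_fusion`, the pairwise squared-cosine form of the penalty, and the scalar gates are all assumptions; the paper's exact gating network and loss formulation may differ.

```python
import numpy as np

def mmo_loss(embeddings):
    """Illustrative multimodal orthogonalization (MMO) penalty: the mean
    squared cosine similarity over all pairs of modality embeddings.
    Minimizing it pushes the embeddings toward mutual orthogonality,
    i.e. toward carrying complementary rather than redundant information."""
    # L2-normalize each embedding so dot products become cosine similarities.
    normed = [e / np.linalg.norm(e) for e in embeddings]
    pairs = [(i, j) for i in range(len(normed))
             for j in range(i + 1, len(normed))]
    return float(np.mean([(normed[i] @ normed[j]) ** 2 for i, j in pairs]))

def gated_tensor_fusion(embeddings, gates):
    """Illustrative attention-gated tensor fusion: scale each modality
    embedding by its (learned) scalar attention gate, then combine the
    modalities via an outer product, flattened into one fused vector."""
    gated = [g * e for g, e in zip(gates, embeddings)]
    fused = gated[0]
    for e in gated[1:]:
        fused = np.outer(fused, e).ravel()
    return fused

# Toy 3-D embeddings standing in for two modality branches.
rad = np.array([1.0, 0.0, 0.0])   # e.g. radiology embedding
path = np.array([0.0, 1.0, 0.0])  # e.g. pathology embedding

print(mmo_loss([rad, path]))  # orthogonal embeddings -> 0.0 penalty
print(mmo_loss([rad, rad]))   # identical embeddings -> 1.0 penalty
print(gated_tensor_fusion([rad, path], gates=[0.9, 0.8]).shape)  # (9,)
```

In the paper, each modality branch (MRI, H&E, DNA sequencing, clinical variables) produces such an embedding, the gates come from a learned attention mechanism rather than fixed scalars, and the MMO term is added to the survival loss during training.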
Related papers
- UNICORN: A Deep Learning Model for Integrating Multi-Stain Data in Histopathology
UNICORN is a multi-modal transformer capable of processing multi-stain histopathology for atherosclerosis severity class prediction.
The architecture comprises a two-stage, end-to-end trainable model with specialized modules utilizing transformer self-attention blocks.
UNICORN achieved a classification accuracy of 0.67, outperforming other state-of-the-art models.
arXiv Detail & Related papers (2024-09-26T12:13:52Z) - MMIST-ccRCC: A Real World Medical Dataset for the Development of Multi-Modal Systems
We introduce a real-world multi-modal dataset called MMIST-ccRCC.
This dataset comprises two radiology modalities (CT and MRI), histopathology, genomics, and clinical data from 618 patients with clear cell renal cell carcinoma (ccRCC).
We show that even with severe rates of missing data, fusing modalities improves survival forecasting.
arXiv Detail & Related papers (2024-05-02T18:29:05Z) - Cross-modality Attention-based Multimodal Fusion for Non-small Cell Lung Cancer (NSCLC) Patient Survival Prediction
We propose a cross-modality attention-based multimodal fusion pipeline designed to integrate modality-specific knowledge for patient survival prediction in non-small cell lung cancer (NSCLC).
Compared with the single-modality models, which achieved C-indices of 0.5772 and 0.5885 using tissue image data or RNA-seq data alone, respectively, the proposed fusion approach achieved a C-index of 0.6587 in our experiment.
arXiv Detail & Related papers (2023-08-18T21:42:52Z) - SEMPAI: a Self-Enhancing Multi-Photon Artificial Intelligence for prior-informed assessment of muscle function and pathology
We introduce the Self-Enhancing Multi-Photon Artificial Intelligence (SEMPAI), which integrates hypothesis-driven priors into a data-driven deep learning approach.
SEMPAI performs joint learning of several tasks to enable prediction for small datasets.
SEMPAI outperforms state-of-the-art biomarkers in six of seven predictive tasks, including those with scarce data.
arXiv Detail & Related papers (2022-10-28T17:03:04Z) - Integrative Imaging Informatics for Cancer Research: Workflow Automation for Neuro-oncology (I3CR-WANO)
We propose an artificial intelligence-based solution for the aggregation and processing of multisequence neuro-oncology MRI data.
Our end-to-end framework i) classifies MRI sequences using an ensemble classifier, ii) preprocesses the data in a reproducible manner, and iii) delineates tumor tissue subtypes.
It is robust to missing sequences and adopts an expert-in-the-loop approach, where the segmentation results may be manually refined by radiologists.
arXiv Detail & Related papers (2022-10-06T18:23:42Z) - RadioPathomics: Multimodal Learning in Non-Small Cell Lung Cancer for Adaptive Radiotherapy
We develop a multimodal late fusion approach to predict radiation therapy outcomes for non-small-cell lung cancer patients.
Experiments show that the proposed multimodal paradigm, with an AUC of 90.9%, outperforms each unimodal approach.
arXiv Detail & Related papers (2022-04-26T16:32:52Z) - Multiple Time Series Fusion Based on LSTM: An Application to CAP A Phase Classification Using EEG
This work carries out deep-learning-based feature-level fusion of electroencephalogram (EEG) channels.
Channel selection, fusion, and classification procedures were optimized by two optimization algorithms.
arXiv Detail & Related papers (2021-12-18T14:17:49Z) - A multi-stage machine learning model on diagnosis of esophageal manometry
The framework includes deep-learning models at the swallow-level stage and feature-based machine learning models at the study-level stage.
This is the first artificial-intelligence model to automatically predict the Chicago Classification (CC) diagnosis of a high-resolution manometry (HRM) study from raw multi-swallow data.
arXiv Detail & Related papers (2021-06-25T20:09:23Z) - G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for Biomarker Identification and Disease Classification
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z) - Co-Heterogeneous and Adaptive Segmentation from Multi-Source and Multi-Phase CT Imaging Data: A Study on Pathological Liver and Lesion Segmentation
We present a novel segmentation strategy, co-heterogeneous and adaptive segmentation (CHASe).
We propose a versatile framework that fuses appearance-based semi-supervision, mask-based adversarial domain adaptation, and pseudo-labeling.
CHASe can further improve pathological liver mask Dice-Sørensen coefficients by 4.2% to 9.4%.
arXiv Detail & Related papers (2020-05-27T06:58:39Z) - A Systematic Approach to Featurization for Cancer Drug Sensitivity Predictions with Deep Learning
We train >35,000 neural network models, sweeping over common featurization techniques.
We found the RNA-seq data to be highly redundant and informative, even with subsets larger than 128 features.
arXiv Detail & Related papers (2020-04-30T20:42:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.